[CIR] Implement __builtin_va_arg #153834
Conversation
@llvm/pr-subscribers-clangir @llvm/pr-subscribers-clang

Author: Morris Hafner (mmha)

Changes

Part of #153286. This patch adds support for `__builtin_va_arg` by adding the `cir.va.arg` operation. Unlike the incubator, it doesn't depend on any target-specific lowering (yet) but maps to `llvm.va_arg`.

Full diff: https://github.com/llvm/llvm-project/pull/153834.diff

9 Files Affected:
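Before the per-file diff, a minimal end-to-end illustration of what this enables; it mirrors the new test `clang/test/CIR/CodeGen/var_arg.c` below (the `sum` helper is illustrative, not part of the diff):

```cpp
// A variadic sum: each builtin below now round-trips through ClangIR,
// lowering to cir.va.start / cir.va.arg / cir.va.end and then to LLVM IR.
int sum(int count, ...) {
  __builtin_va_list args;
  __builtin_va_start(args, count);
  int total = 0;
  for (int i = 0; i < count; ++i)
    total += __builtin_va_arg(args, int); // cir.va.arg fetches the next int
  __builtin_va_end(args);
  return total;
}
```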
diff --git a/clang/include/clang/CIR/Dialect/IR/CIROps.td b/clang/include/clang/CIR/Dialect/IR/CIROps.td
index a77e9199cdc96..34d716a6b963c 100644
--- a/clang/include/clang/CIR/Dialect/IR/CIROps.td
+++ b/clang/include/clang/CIR/Dialect/IR/CIROps.td
@@ -3415,4 +3415,126 @@ def CIR_FAbsOp : CIR_UnaryFPToFPBuiltinOp<"fabs", "FAbsOp"> {
}];
}
+//===----------------------------------------------------------------------===//
+// Variadic Operations
+//===----------------------------------------------------------------------===//
+
+def CIR_VAStartOp : CIR_Op<"va.start"> {
+ let summary = "Starts a variable argument list";
+ let description = [{
+ The `cir.va.start` operation models the C/C++ `va_start` macro by
+ initializing a variable argument list at the given va_list storage
+ location.
+
+ The operand must be a pointer to the target's `va_list` representation.
+ This operation has no results and produces its effect by mutating the
+ storage referenced by the pointer operand.
+
+ Each `cir.va.start` must be paired with a corresponding `cir.va.end`
+ on the same logical `va_list` object along all control-flow paths. After
+ `cir.va.end`, the `va_list` must not be accessed unless reinitialized
+ with another `cir.va.start`.
+
+ Lowering typically maps this to the LLVM intrinsic `llvm.va_start`,
+ passing the appropriately decayed pointer to the underlying `va_list`
+ storage.
+
+ Example:
+
+ ```mlir
+ // %args : !cir.ptr<!cir.array<!rec___va_list_tag x 1>>
+ %p = cir.cast(array_to_ptrdecay, %args
+ : !cir.ptr<!cir.array<!rec___va_list_tag x 1>>),
+ !cir.ptr<!rec___va_list_tag>
+ cir.va.start %p : !cir.ptr<!rec___va_list_tag>
+ ```
+ }];
+ let arguments = (ins CIR_PointerType:$arg_list);
+
+ let assemblyFormat = [{
+ $arg_list attr-dict `:` type(operands)
+ }];
+}
+
+def CIR_VAEndOp : CIR_Op<"va.end"> {
+ let summary = "Ends a variable argument list";
+ let description = [{
+ The `cir.va.end` operation models the C/C++ `va_end` macro by finalizing
+ and cleaning up a variable argument list previously initialized with
+ `cir.va.start`.
+
+ The operand must be a pointer to the target's `va_list` representation.
+ This operation has no results and produces its effect by mutating the
+ storage referenced by the pointer operand.
+
+ `cir.va.end` must only be called after a matching `cir.va.start` on the
+ same `va_list` along all control-flow paths. After `cir.va.end`, the
+ `va_list` is invalid and must not be accessed unless reinitialized.
+
+ Lowering typically maps this to the LLVM intrinsic `llvm.va_end`,
+ passing the appropriately decayed pointer to the underlying `va_list`
+ storage.
+
+ Example:
+ ```mlir
+ // %args : !cir.ptr<!cir.array<!rec___va_list_tag x 1>>
+ %p = cir.cast(array_to_ptrdecay, %args
+ : !cir.ptr<!cir.array<!rec___va_list_tag x 1>>),
+ !cir.ptr<!rec___va_list_tag>
+ cir.va.end %p : !cir.ptr<!rec___va_list_tag>
+ ```
+ }];
+
+ let arguments = (ins CIR_PointerType:$arg_list);
+
+ let assemblyFormat = [{
+ $arg_list attr-dict `:` type(operands)
+ }];
+}
+
+def CIR_VAArgOp : CIR_Op<"va.arg"> {
+ let summary = "Fetches next variadic element as a given type";
+ let description = [{
+ The `cir.va.arg` operation models the C/C++ `va_arg` macro by reading the
+ next argument from an active variable argument list and producing it as a
+ value of a specified result type.
+
+ The operand must be a pointer to the target's `va_list` representation.
+ The operation advances the `va_list` state as a side effect and returns
+ the fetched value as the result, whose type is chosen by the user of the
+ operation.
+
+ A `cir.va.arg` must only be used on a `va_list` that has been initialized
+ with `cir.va.start` and not yet finalized by `cir.va.end`. The semantics
+ (including alignment and promotion rules) follow the platform ABI; the
+ frontend is responsible for providing a `va_list` pointer that matches the
+ target representation.
+
+ Unless replaced by LoweringPrepare for the chosen target ABI, this maps to
+ LLVM's `va_arg` instruction, yielding a value of the requested result type
+ and updating the underlying `va_list` pointer.
+
+ Example:
+ ```mlir
+ // %args : !cir.ptr<!cir.array<!rec___va_list_tag x 1>>
+ %p = cir.cast(array_to_ptrdecay, %args
+ : !cir.ptr<!cir.array<!rec___va_list_tag x 1>>),
+ !cir.ptr<!rec___va_list_tag>
+ cir.va.start %p : !cir.ptr<!rec___va_list_tag>
+
+ // Fetch an `int` from the vararg list.
+ %v = cir.va.arg %p : (!cir.ptr<!rec___va_list_tag>) -> !s32i
+
+ cir.va.end %p : !cir.ptr<!rec___va_list_tag>
+ ```
+ }];
+
+ let arguments = (ins CIR_PointerType:$arg_list);
+ let results = (outs CIR_AnyType:$result);
+
+ let assemblyFormat = [{
+ $arg_list attr-dict `:` functional-type(operands, $result)
+ }];
+}
+
#endif // CLANG_CIR_DIALECT_IR_CIROPS_TD
diff --git a/clang/lib/CIR/CodeGen/CIRGenBuiltin.cpp b/clang/lib/CIR/CodeGen/CIRGenBuiltin.cpp
index 36aea4c1d39ce..391e2e12947ce 100644
--- a/clang/lib/CIR/CodeGen/CIRGenBuiltin.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenBuiltin.cpp
@@ -125,6 +125,18 @@ RValue CIRGenFunction::emitBuiltinExpr(const GlobalDecl &gd, unsigned builtinID,
default:
break;
+ // C stdarg builtins.
+ case Builtin::BI__builtin_stdarg_start:
+ case Builtin::BI__builtin_va_start:
+ case Builtin::BI__va_start:
+ case Builtin::BI__builtin_va_end: {
+ emitVAStartEnd(builtinID == Builtin::BI__va_start
+ ? emitScalarExpr(e->getArg(0))
+ : emitVAListRef(e->getArg(0)).getPointer(),
+ builtinID != Builtin::BI__builtin_va_end);
+ return {};
+ }
+
case Builtin::BIfabs:
case Builtin::BIfabsf:
case Builtin::BIfabsl:
@@ -361,3 +373,22 @@ mlir::Value CIRGenFunction::emitCheckedArgForAssume(const Expr *e) {
"emitCheckedArgForAssume: sanitizers are NYI");
return {};
}
+
+void CIRGenFunction::emitVAStartEnd(mlir::Value argValue, bool isStart) {
+ // LLVM codegen casts to i8*; there is no real gain in doing this in CIRGen
+ // this early, so defer it to LLVM lowering.
+ if (isStart)
+ cir::VAStartOp::create(builder, argValue.getLoc(), argValue);
+ else
+ cir::VAEndOp::create(builder, argValue.getLoc(), argValue);
+}
+
+// FIXME(cir): This completely abstracts away the ABI with a generic CIR Op. We
+// need to decide how to handle va_arg target-specific codegen.
+mlir::Value CIRGenFunction::emitVAArg(VAArgExpr *ve, Address &vaListAddr) {
+ assert(!cir::MissingFeatures::msabi());
+ mlir::Location loc = cgm.getLoc(ve->getExprLoc());
+ mlir::Type type = convertType(ve->getType());
+ mlir::Value vaList = emitVAListRef(ve->getSubExpr()).getPointer();
+ return cir::VAArgOp::create(builder, loc, type, vaList);
+}
diff --git a/clang/lib/CIR/CodeGen/CIRGenExpr.cpp b/clang/lib/CIR/CodeGen/CIRGenExpr.cpp
index 8bcca6f5d1803..00f2a64281b8c 100644
--- a/clang/lib/CIR/CodeGen/CIRGenExpr.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenExpr.cpp
@@ -90,11 +90,8 @@ Address CIRGenFunction::emitPointerWithAlignment(const Expr *expr,
} break;
// Array-to-pointer decay. TODO(cir): BaseInfo and TBAAInfo.
- case CK_ArrayToPointerDecay: {
- cgm.errorNYI(expr->getSourceRange(),
- "emitPointerWithAlignment: array-to-pointer decay");
- return Address::invalid();
- }
+ case CK_ArrayToPointerDecay:
+ return emitArrayToPointerDecay(ce->getSubExpr());
case CK_UncheckedDerivedToBase:
case CK_DerivedToBase: {
diff --git a/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp b/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp
index 8649bab91ce8e..7fb5c020dd5f7 100644
--- a/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp
@@ -384,6 +384,19 @@ class ScalarExprEmitter : public StmtVisitor<ScalarExprEmitter, mlir::Value> {
return Visit(e->getReplacement());
}
+ mlir::Value VisitVAArgExpr(VAArgExpr *ve) {
+ QualType ty = ve->getType();
+
+ if (ty->isVariablyModifiedType())
+ cgf.cgm.errorNYI(ve->getSourceRange(),
+ "variably modified types in varargs");
+
+ Address argValue = Address::invalid();
+ return cgf.emitVAArg(ve, argValue);
+ }
+
mlir::Value VisitUnaryExprOrTypeTraitExpr(const UnaryExprOrTypeTraitExpr *e);
mlir::Value
VisitAbstractConditionalOperator(const AbstractConditionalOperator *e);
diff --git a/clang/lib/CIR/CodeGen/CIRGenFunction.cpp b/clang/lib/CIR/CodeGen/CIRGenFunction.cpp
index d6a0792292604..917afa8e78021 100644
--- a/clang/lib/CIR/CodeGen/CIRGenFunction.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenFunction.cpp
@@ -1080,4 +1080,10 @@ void CIRGenFunction::emitVariablyModifiedType(QualType type) {
} while (type->isVariablyModifiedType());
}
+Address CIRGenFunction::emitVAListRef(const Expr *e) {
+ if (getContext().getBuiltinVaListType()->isArrayType())
+ return emitPointerWithAlignment(e);
+ return emitLValue(e).getAddress();
+}
+
} // namespace clang::CIRGen
diff --git a/clang/lib/CIR/CodeGen/CIRGenFunction.h b/clang/lib/CIR/CodeGen/CIRGenFunction.h
index 9a887ec047f86..5bd59e6a84813 100644
--- a/clang/lib/CIR/CodeGen/CIRGenFunction.h
+++ b/clang/lib/CIR/CodeGen/CIRGenFunction.h
@@ -1411,6 +1411,30 @@ class CIRGenFunction : public CIRGenTypeCache {
const clang::Stmt *thenS,
const clang::Stmt *elseS);
+ /// Build a "reference" to a va_list; this is either the address or the value
+ /// of the expression, depending on how va_list is defined.
+ Address emitVAListRef(const Expr *e);
+
+ /// Emits a CIR variable-argument operation, either
+ /// \c cir.va.start or \c cir.va.end.
+ ///
+ /// \param argValue A reference to the \c va_list as emitted by either
+ /// \c emitVAListRef or \c emitMSVAListRef.
+ ///
+ /// \param isStart If \c true, emits \c cir.va.start, otherwise \c cir.va.end.
+ void emitVAStartEnd(mlir::Value argValue, bool isStart);
+
+ /// Generate code to get an argument from the passed in pointer
+ /// and update it accordingly.
+ ///
+ /// \param ve The \c VAArgExpr for which to generate code.
+ ///
+ /// \param vaListAddr Receives a reference to the \c va_list as emitted by
+ /// either \c emitVAListRef or \c emitMSVAListRef.
+ ///
+ /// \returns SSA value with the argument.
+ mlir::Value emitVAArg(VAArgExpr *ve, Address &vaListAddr);
+
/// ----------------------
/// CIR build helpers
/// -----------------
diff --git a/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.cpp b/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.cpp
index 1ea296a6887ef..45fb30128c50e 100644
--- a/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.cpp
+++ b/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.cpp
@@ -2336,6 +2336,9 @@ void ConvertCIRToLLVMPass::runOnOperation() {
CIRToLLVMTrapOpLowering,
CIRToLLVMUnaryOpLowering,
CIRToLLVMUnreachableOpLowering,
+ CIRToLLVMVAArgOpLowering,
+ CIRToLLVMVAEndOpLowering,
+ CIRToLLVMVAStartOpLowering,
CIRToLLVMVecCmpOpLowering,
CIRToLLVMVecCreateOpLowering,
CIRToLLVMVecExtractOpLowering,
@@ -3035,6 +3038,42 @@ mlir::LogicalResult CIRToLLVMInlineAsmOpLowering::matchAndRewrite(
return mlir::success();
}
+mlir::LogicalResult CIRToLLVMVAStartOpLowering::matchAndRewrite(
+ cir::VAStartOp op, OpAdaptor adaptor,
+ mlir::ConversionPatternRewriter &rewriter) const {
+ auto opaquePtr = mlir::LLVM::LLVMPointerType::get(getContext());
+ auto vaList = mlir::LLVM::BitcastOp::create(rewriter, op.getLoc(), opaquePtr,
+ adaptor.getArgList());
+ rewriter.replaceOpWithNewOp<mlir::LLVM::VaStartOp>(op, vaList);
+ return mlir::success();
+}
+
+mlir::LogicalResult CIRToLLVMVAEndOpLowering::matchAndRewrite(
+ cir::VAEndOp op, OpAdaptor adaptor,
+ mlir::ConversionPatternRewriter &rewriter) const {
+ auto opaquePtr = mlir::LLVM::LLVMPointerType::get(getContext());
+ auto vaList = mlir::LLVM::BitcastOp::create(rewriter, op.getLoc(), opaquePtr,
+ adaptor.getArgList());
+ rewriter.replaceOpWithNewOp<mlir::LLVM::VaEndOp>(op, vaList);
+ return mlir::success();
+}
+
+mlir::LogicalResult CIRToLLVMVAArgOpLowering::matchAndRewrite(
+ cir::VAArgOp op, OpAdaptor adaptor,
+ mlir::ConversionPatternRewriter &rewriter) const {
+ auto opaquePtr = mlir::LLVM::LLVMPointerType::get(getContext());
+ auto vaList = mlir::LLVM::BitcastOp::create(rewriter, op.getLoc(), opaquePtr,
+ adaptor.getArgList());
+
+ mlir::Type llvmType =
+ getTypeConverter()->convertType(op->getResultTypes().front());
+ if (!llvmType)
+ return mlir::failure();
+
+ rewriter.replaceOpWithNewOp<mlir::LLVM::VaArgOp>(op, llvmType, vaList);
+ return mlir::success();
+}
+
std::unique_ptr<mlir::Pass> createConvertCIRToLLVMPass() {
return std::make_unique<ConvertCIRToLLVMPass>();
}
diff --git a/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.h b/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.h
index e32bf2d1bae0c..5263ff9910cbe 100644
--- a/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.h
+++ b/clang/lib/CIR/Lowering/DirectToLLVM/LowerToLLVM.h
@@ -684,6 +684,36 @@ class CIRToLLVMInlineAsmOpLowering
mlir::ConversionPatternRewriter &) const override;
};
+class CIRToLLVMVAStartOpLowering
+ : public mlir::OpConversionPattern<cir::VAStartOp> {
+public:
+ using mlir::OpConversionPattern<cir::VAStartOp>::OpConversionPattern;
+
+ mlir::LogicalResult
+ matchAndRewrite(cir::VAStartOp op, OpAdaptor,
+ mlir::ConversionPatternRewriter &) const override;
+};
+
+class CIRToLLVMVAEndOpLowering
+ : public mlir::OpConversionPattern<cir::VAEndOp> {
+public:
+ using mlir::OpConversionPattern<cir::VAEndOp>::OpConversionPattern;
+
+ mlir::LogicalResult
+ matchAndRewrite(cir::VAEndOp op, OpAdaptor,
+ mlir::ConversionPatternRewriter &) const override;
+};
+
+class CIRToLLVMVAArgOpLowering
+ : public mlir::OpConversionPattern<cir::VAArgOp> {
+public:
+ using mlir::OpConversionPattern<cir::VAArgOp>::OpConversionPattern;
+
+ mlir::LogicalResult
+ matchAndRewrite(cir::VAArgOp op, OpAdaptor,
+ mlir::ConversionPatternRewriter &) const override;
+};
+
} // namespace direct
} // namespace cir
diff --git a/clang/test/CIR/CodeGen/var_arg.c b/clang/test/CIR/CodeGen/var_arg.c
new file mode 100644
index 0000000000000..647e8f15fa27f
--- /dev/null
+++ b/clang/test/CIR/CodeGen/var_arg.c
@@ -0,0 +1,89 @@
+// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -Wno-unused-value -fclangir -emit-cir %s -o %t.cir
+// RUN: FileCheck --input-file=%t.cir %s -check-prefix=CIR
+// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -Wno-unused-value -fclangir -emit-llvm %s -o %t-cir.ll
+// RUN: FileCheck --input-file=%t-cir.ll %s -check-prefix=LLVM
+// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -Wno-unused-value -emit-llvm %s -o %t.ll
+// RUN: FileCheck --input-file=%t.ll %s -check-prefix=OGCG
+
+int varargs(int count, ...) {
+ __builtin_va_list args;
+ __builtin_va_start(args, count);
+ int res = __builtin_va_arg(args, int);
+ __builtin_va_end(args);
+ return res;
+}
+
+// CIR: !rec___va_list_tag = !cir.record<struct "__va_list_tag" {!u32i, !u32i, !cir.ptr<!void>, !cir.ptr<!void>}
+
+// CIR: cir.func dso_local @varargs(%[[COUNT:.+]]: !s32i {{.*}}, ...) -> !s32i
+// CIR: %[[COUNT_ADDR:.+]] = cir.alloca !s32i, !cir.ptr<!s32i>, ["count", init]
+// CIR: %[[RETVAL:.+]] = cir.alloca !s32i, !cir.ptr<!s32i>, ["__retval"]
+// CIR: %[[ARGS:.+]] = cir.alloca !cir.array<!rec___va_list_tag x 1>, !cir.ptr<!cir.array<!rec___va_list_tag x 1>>, ["args"]
+// CIR: %[[RES:.+]] = cir.alloca !s32i, !cir.ptr<!s32i>, ["res", init]
+// CIR: cir.store %[[COUNT]], %[[COUNT_ADDR]] : !s32i, !cir.ptr<!s32i>
+// CIR: %[[ARGS_DECAY1:.+]] = cir.cast(array_to_ptrdecay, %[[ARGS]] : !cir.ptr<!cir.array<!rec___va_list_tag x 1>>), !cir.ptr<!rec___va_list_tag>
+// CIR: cir.va.start %[[ARGS_DECAY1]] : !cir.ptr<!rec___va_list_tag>
+// CIR: %[[ARGS_DECAY2:.+]] = cir.cast(array_to_ptrdecay, %[[ARGS]] : !cir.ptr<!cir.array<!rec___va_list_tag x 1>>), !cir.ptr<!rec___va_list_tag>
+// CIR: %[[ARGVAL:.+]] = cir.va.arg %[[ARGS_DECAY2]] : (!cir.ptr<!rec___va_list_tag>) -> !s32i
+// CIR: cir.store {{.*}} %[[ARGVAL]], %[[RES]] : !s32i, !cir.ptr<!s32i>
+// CIR: %[[ARGS_DECAY3:.+]] = cir.cast(array_to_ptrdecay, %[[ARGS]] : !cir.ptr<!cir.array<!rec___va_list_tag x 1>>), !cir.ptr<!rec___va_list_tag>
+// CIR: cir.va.end %[[ARGS_DECAY3]] : !cir.ptr<!rec___va_list_tag>
+// CIR: %[[RES_VAL:.+]] = cir.load {{.*}} %[[RES]] : !cir.ptr<!s32i>, !s32i
+// CIR: cir.store %[[RES_VAL]], %[[RETVAL]] : !s32i, !cir.ptr<!s32i>
+// CIR: %[[RET:.+]] = cir.load %[[RETVAL]] : !cir.ptr<!s32i>, !s32i
+// CIR: cir.return %[[RET]] : !s32i
+
+// LLVM: %struct.__va_list_tag = type { i32, i32, ptr, ptr }
+
+// LLVM: define dso_local i32 @varargs(i32 %[[ARG0:.+]], ...)
+// LLVM: %[[COUNT_ADDR:.+]] = alloca i32, i64 1
+// LLVM: %[[RET_SLOT:.+]] = alloca i32, i64 1
+// LLVM: %[[VASTORAGE:.+]] = alloca [1 x %struct.__va_list_tag], i64 1
+// LLVM: %[[RES:.+]] = alloca i32, i64 1
+// LLVM: store i32 %[[ARG0]], ptr %[[COUNT_ADDR]]
+// LLVM: %[[G1:.+]] = getelementptr %struct.__va_list_tag, ptr %[[VASTORAGE]], i32 0
+// LLVM: call void @llvm.va_start.p0(ptr %[[G1]])
+// LLVM: %[[G2:.+]] = getelementptr %struct.__va_list_tag, ptr %[[VASTORAGE]], i32 0
+// LLVM: %[[NEXT:.+]] = va_arg ptr %[[G2]], i32
+// LLVM: store i32 %[[NEXT]], ptr %[[RES]]
+// LLVM: %[[G3:.+]] = getelementptr %struct.__va_list_tag, ptr %[[VASTORAGE]], i32 0
+// LLVM: call void @llvm.va_end.p0(ptr %[[G3]])
+// LLVM: %[[RVAL:.+]] = load i32, ptr %[[RES]]
+// LLVM: store i32 %[[RVAL]], ptr %[[RET_SLOT]]
+// LLVM: %[[RET:.+]] = load i32, ptr %[[RET_SLOT]]
+// LLVM: ret i32 %[[RET]]
+
+// OGCG: %struct.__va_list_tag = type { i32, i32, ptr, ptr }
+
+// OGCG: define dso_local i32 @varargs(i32 noundef %[[COUNT:.+]], ...)
+// OGCG: %[[COUNT_ADDR:.+]] = alloca i32
+// OGCG: %[[ARGS:.+]] = alloca [1 x %struct.__va_list_tag]
+// OGCG: %[[RES:.+]] = alloca i32
+// OGCG: store i32 %[[COUNT]], ptr %[[COUNT_ADDR]]
+// OGCG: %[[DEC1:.+]] = getelementptr inbounds [1 x %struct.__va_list_tag], ptr %[[ARGS]], i64 0, i64 0
+// OGCG: call void @llvm.va_start.p0(ptr %[[DEC1]])
+// OGCG: %[[DEC2:.+]] = getelementptr inbounds [1 x %struct.__va_list_tag], ptr %[[ARGS]], i64 0, i64 0
+// OGCG: {{.*}} = getelementptr inbounds nuw %struct.__va_list_tag, ptr %[[DEC2]], i32 0, i32 0
+// OGCG: {{.*}} = load i32, ptr {{.*}}
+// OGCG: br i1 {{.*}}, label %[[INREG:.+]], label %[[INMEM:.+]]
+// OGCG: [[INREG]]:
+// OGCG: {{.*}} = getelementptr inbounds nuw %struct.__va_list_tag, ptr %[[DEC2]], i32 0, i32 3
+// OGCG: {{.*}} = load ptr, ptr {{.*}}
+// OGCG: {{.*}} = getelementptr i8, ptr {{.*}}, i32 {{.*}}
+// OGCG: {{.*}} = add i32 {{.*}}, 8
+// OGCG: store i32 {{.*}}, ptr {{.*}}
+// OGCG: br label %[[END:.+]]
+// OGCG: [[INMEM]]:
+// OGCG: {{.*}} = getelementptr inbounds nuw %struct.__va_list_tag, ptr %[[DEC2]], i32 0, i32 2
+// OGCG: {{.*}} = load ptr, ptr {{.*}}
+// OGCG: {{.*}} = getelementptr i8, ptr {{.*}}, i32 8
+// OGCG: store ptr {{.*}}, ptr {{.*}}
+// OGCG: br label %[[END]]
+// OGCG: [[END]]:
+// OGCG: %[[ARGPTR:.+]] = phi ptr [ {{.*}}, %[[INREG]] ], [ {{.*}}, %[[INMEM]] ]
+// OGCG: %[[V:.+]] = load i32, ptr %[[ARGPTR]]
+// OGCG: store i32 %[[V]], ptr %[[RES]]
+// OGCG: %[[DEC3:.+]] = getelementptr inbounds [1 x %struct.__va_list_tag], ptr %[[ARGS]], i64 0, i64 0
+// OGCG: call void @llvm.va_end.p0(ptr %[[DEC3]])
+// OGCG: %[[RET:.+]] = load i32, ptr %[[RES]]
+// OGCG: ret i32 %[[RET]]
✅ With the latest revision this PR passed the C/C++ code formatter.
I think this comment doesn't belong. Completely abstracting away the ABI details is what we want in CIR. At most, this should be a comment saying that the ABI details will be handled in LoweringPrepare.
That comment still applies to some extent, because we currently lower to LLVM's va_arg instruction and will need to replace that in LoweringPrepare in the future. I'll clarify that.
The vaListAddr parameter isn't used, and it doesn't look like it will be. That does raise a question though. Classic codegen uses this parameter when emitting aggregate or complex va args to detect when the ABI didn't support the type for the arg. We won't detect that until a later lowering phase. Is that a problem? I don't think we have access to the same diagnostic reporting mechanism at that point.
Hmm, if we want to defer codegen to LoweringPrepare, we can still have a function in ABIInfo that tells whether a given QualType can be a va_arg.
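A minimal sketch of what that could look like (hypothetical: neither this hook nor its name exists in ClangIR's ABIInfo today; it only illustrates the shape of the query):

```cpp
#include "clang/AST/Type.h"

// Hypothetical sketch of an ABIInfo hook that CIRGen (or LoweringPrepare)
// could query to reject unsupported va_arg types with a real diagnostic
// instead of failing late during target lowering.
class ABIInfo {
public:
  virtual ~ABIInfo() = default;

  // Return true if the target ABI can materialize a va_arg of this type.
  // The conservative default accepts everything; targets would override it.
  virtual bool isVAArgSupported(clang::QualType /*ty*/) const { return true; }
};
```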
To make the review process a bit more convenient, I suggest using a stacked approach; see https://llvm.org/docs/GitHub.html#stacked-pull-requests
andykaylor left a comment:
lgtm
Part of #153286. Depends on #153819.

This patch adds support for `__builtin_va_arg` by adding the `cir.va.arg` operation. Unlike the incubator, it doesn't depend on any target-specific lowering (yet) but maps to `llvm.va_arg`.