
Commit fa4b2db

chore(pooling|interpolate): More specific warnings when plugin is used

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>

1 parent d4fe8da · commit fa4b2db

File tree: 2 files changed (+5 -1 lines changed)

core/conversion/converters/impl/interpolate.cpp

Lines changed: 1 addition & 1 deletion

@@ -21,7 +21,7 @@ void create_plugin(ConversionCtx* ctx, const torch::jit::Node* n, nvinfer1::ITen
                    std::vector<int64_t> out_shape,
                    std::vector<int64_t> out_size,
                    std::string mode) {
-  LOG_WARNING("Interpolation layer will be run through ATen, not TensorRT. Performance may differ.");
+  LOG_WARNING("Interpolation layer will be run through ATen, not TensorRT. Performance may be lower than expected");
 
   auto creator = new plugins::InterpolatePluginCreator();
   auto plugin = creator->createPlugin(name, in_shape, out_shape, out_size, mode, false);
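For context, this hunk constructs the creator directly with new plugins::InterpolatePluginCreator() and asks it for a plugin instance. TensorRT exposes the same creator pattern through a global registry as well; below is a minimal, self-contained sketch of that lookup. getPluginRegistry() and getPluginCreator() are stock TensorRT APIs, but the "Interpolate"/"1" name and version strings are assumptions for illustration, not the names TRTorch actually registers.

// Hedged sketch: looking up an IPluginCreator via TensorRT's global registry.
// The plugin name and version strings below are hypothetical.
#include <iostream>
#include <NvInfer.h>

int main() {
  // The registry holds every plugin creator linked into the process.
  auto* creator = getPluginRegistry()->getPluginCreator("Interpolate", "1");
  if (creator == nullptr) {
    std::cerr << "no such plugin creator registered" << std::endl;
    return 1;
  }
  std::cout << "found plugin creator: " << creator->getPluginName() << std::endl;
  return 0;
}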

core/conversion/converters/impl/pooling.cpp

Lines changed: 4 additions & 0 deletions

@@ -279,7 +279,11 @@ auto pooling_registrations TRTORCH_UNUSED = RegisterNodeConversionPatterns()
       auto out_size = util::toVec(util::toDims(args[1].unwrapToIntList()));
 
       if (ctx->input_is_dynamic) {
+#if NV_TENSORRT_MAJOR < 7 || (NV_TENSORRT_MAJOR == 7 && NV_TENSORRT_MINOR < 1)
+        LOG_WARNING("Adaptive pooling layer will be run through ATen, not TensorRT. Performance will be lower than expected. Consider switching to a static input shape or to non-adaptive pooling if this is an issue");
+#else
         LOG_WARNING("Adaptive pooling layer will be run through ATen (on CPU), not TensorRT. Performance will suffer. Consider switching to a static input shape or to non-adaptive pooling");
+#endif
 
         auto out_shape = in_shape;
         std::copy(out_size.begin(), out_size.end(), out_shape.begin() + (in_shape.size() - out_size.size()));
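The new #if selects the warning text at compile time, based on the TensorRT headers the build sees. Below is a minimal, self-contained sketch of the same gate: NV_TENSORRT_MAJOR and NV_TENSORRT_MINOR are TensorRT's own macros from NvInferVersion.h, while the LOG_WARNING macro here is a hypothetical stand-in for TRTorch's logging macro.

// Self-contained sketch of the compile-time TensorRT version gate used above.
#include <iostream>
#include <NvInferVersion.h>

// Stand-in for TRTorch's LOG_WARNING; not the real macro.
#define LOG_WARNING(msg) (std::cerr << "WARNING: " << (msg) << std::endl)

int main() {
#if NV_TENSORRT_MAJOR < 7 || (NV_TENSORRT_MAJOR == 7 && NV_TENSORRT_MINOR < 1)
  // Builds against TensorRT older than 7.1 take this branch at compile time.
  LOG_WARNING("built against TensorRT < 7.1");
#else
  LOG_WARNING("built against TensorRT >= 7.1");
#endif
  return 0;
}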
