Bug Description
Serialization of a Torch-TensorRT compiled module fails when the module contains a graph break (forced in the reproduction below via torch_executed_ops).
To Reproduce
Steps to reproduce the behavior:
import torch
import torch_tensorrt as torchtrt
import torchvision

torch.manual_seed(0)
torch.cuda.manual_seed_all(0)

model = torchvision.models.resnet50().eval().to("cuda")
inputs = [torch.randn((1, 3, 224, 224)).to("cuda")]

BATCH = torch.export.Dim("BATCH", min=1, max=16)
exp_program = torch.export.export(model, tuple(inputs))  # , dynamic_shapes=({0: BATCH},)
trt_mod1 = torchtrt.dynamo.compile(
    exp_program,
    tuple(inputs),
    # use_python_runtime=use_python_runtime,
    enabled_precisions={torch.float},
    min_block_size=1,
    immutable_weights=False,
    # Keeping aten.add in PyTorch forces a graph break in the compiled module
    torch_executed_ops={torch.ops.aten.add.Tensor},
    reuse_cached_engines=False,
)
# Serializing and reloading the module with the graph break fails here
torchtrt.save(trt_mod1, "/home/other/trt_mod1.pt2", inputs=inputs)
trt_mod_saved = torch.export.load("/home/other/trt_mod1.pt2").module()
Expected behavior
torchtrt.save should serialize the compiled module and torch.export.load should reload it without errors, even though the module contains a graph break.
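A minimal sketch of the expected round trip, reusing trt_mod1 and inputs from the reproduction above (the output comparison and tolerance are illustrative assumptions, not part of the original report):
out_before = trt_mod1(*inputs)
trt_mod_saved = torch.export.load("/home/other/trt_mod1.pt2").module()
out_after = trt_mod_saved(*inputs)
# The reloaded module is expected to produce matching outputs
assert torch.allclose(out_before, out_after, atol=1e-3)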
Environment
Build information about Torch-TensorRT can be found by turning on debug messages (see the sketch after this checklist).
- Torch-TensorRT Version (e.g. 1.0.0):
- PyTorch Version (e.g. 1.0):
- CPU Architecture:
- OS (e.g., Linux):
- How you installed PyTorch (conda, pip, libtorch, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version:
- GPU models and configuration:
- Any other relevant information:
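One way to gather most of these details (a sketch using standard PyTorch/Python tooling rather than anything specific to this report; it assumes Torch-TensorRT's Python components emit their debug/build messages through the standard logging module):
# Shell: collect PyTorch / CUDA / GPU details for the checklist above
#   python -m torch.utils.collect_env
import logging
import torch
import torch_tensorrt as torchtrt

# Raising the log level to DEBUG should surface Torch-TensorRT's build information
logging.basicConfig(level=logging.DEBUG)
print(torch.__version__)              # PyTorch Version
print(torchtrt.__version__)           # Torch-TensorRT Version
print(torch.version.cuda)             # CUDA version
print(torch.cuda.get_device_name(0))  # GPU models and configuration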
Additional context