Conversion to torchscript or ONNX #24

@drewm1980

Description

I'm working on optimizing my model's inference, trying conversion to TorchScript as a first step. When I call torch.jit.script() on my model, I hit:

name = '_weights_ranges', item = {('irrep_0,0', 'regular'): (0, 288)}

    def infer_type(name, item):
        # The forward function from Module is special; never use this annotations; we
        # need to infer type directly using JIT.  I originally wanted to write
        # this test as isinstance(class_annotations[name], Callable) but
        # isinstance on typing things doesn't seem to work: isinstance(list, Callable)
        # is also true!
        if name in class_annotations and class_annotations[name] != torch.nn.Module.__annotations__["forward"]:
            attr_type = torch.jit.annotations.ann_to_type(class_annotations[name], _jit_internal.fake_range())
        elif isinstance(item, torch.jit.Attribute):
            attr_type = torch.jit.annotations.ann_to_type(item.type, _jit_internal.fake_range())
        else:
>           attr_type = torch._C._jit_try_infer_type(item)
E           RuntimeError: Cannot create dict for key type '(str, str)', only int, float, Tensor and string keys are supported

This PyTorch code resides here:
https://github.com/pytorch/pytorch/blob/22902b9242853a4ce319e7c5c4a1c94bc00ccb7a/torch/jit/_recursive.py#L126

TorchScript can't compile a module attribute of type Dict[Tuple[str, str], _], which is used here:

coefficients = weights[self._weights_ranges[io_pair][0]:self._weights_ranges[io_pair][1]]
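A minimal sketch of a possible workaround, assuming the tuple keys can be flattened into single strings (the separator `'|'` and the class/attribute names here are hypothetical, not from e2cnn): TorchScript only supports int, float, Tensor, and str dict keys, so joining the two strings of each key pair into one string should let type inference succeed.

```python
import torch


class StringKeyed(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # Original (unscriptable) form: {('irrep_0,0', 'regular'): (0, 288)}.
        # Flattened form with a str key, which TorchScript can infer as
        # Dict[str, Tuple[int, int]]:
        self._weights_ranges = {'irrep_0,0|regular': (0, 288)}

    def forward(self, weights: torch.Tensor) -> torch.Tensor:
        # Look up the slice bounds by the flattened key and slice the
        # flat weight vector, mirroring the coefficients lookup above.
        start, end = self._weights_ranges['irrep_0,0|regular']
        return weights[start:end]


# Scripting now succeeds because the dict key type is str.
scripted = torch.jit.script(StringKeyed())
```

Whether this is practical for e2cnn depends on how many call sites build and index `_weights_ranges`; the flattening would have to be applied consistently wherever the io_pair tuples are used as keys.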

My goal is to get the model to run as fast as possible on NVIDIA hardware, probably via TensorRT. Is there another known-good conversion path?

The above error was with torch 1.6.0 and e2cnn v0.1.
