Pylance errors #4500

@SlimRG

Connected: microsoft/pylance-release#7375

Description

Pylance doesn't see functions and classes.

My code:

# CUDA
import pycuda.driver as cuda
import pycuda.autoinit  # auto-initializes the CUDA context on import

# TensorRT
import tensorrt as trt

# numpy and torch are used by the wrapper below
import numpy as np
import torch

...

class TrtTextEncoder:
    """
    Универсальная обёртка для text encoder в TensorRT 10+.

    Аргументы:
        engine_path (str): путь к .trt файлу
        model_type (str): "clip" или "t5" — режим работы (для post-processing)
        dtype (torch.dtype): dtype PyTorch для вывода (обычно torch.float16/torch.float32)
        device (torch.device): устройство, куда возвращать тензоры (cpu или cuda)
    """

    def __init__(self,
                 engine_path: str,
                 model_type: str = "clip",
                 dtype: torch.dtype = torch.float16,
                 device: torch.device = torch.device("cpu")):

        assert model_type in ("clip", "t5"), "model_type must be 'clip' or 't5'"
        self.model_type = model_type
        self.dtype = dtype
        self.device = device

        # Initialize the TensorRT runtime and deserialize the engine
        self.logger = trt.Logger(trt.Logger.WARNING)
        runtime = trt.Runtime(self.logger)
        with open(engine_path, "rb") as f:
            engine_bytes = f.read()
        self.engine = runtime.deserialize_cuda_engine(engine_bytes)
        if self.engine is None:
            raise RuntimeError(f"Failed to load TensorRT engine from {engine_path}")

        # Create the execution context
        self.context = self.engine.create_execution_context()

        # Collect the names of all input/output tensors
        self.tensor_names = [
            self.engine.get_tensor_name(i)
            for i in range(self.engine.num_io_tensors)
        ]

        # Prepare buffers and the list of device pointers (bindings)
        self._allocate_buffers()

    def _allocate_buffers(self):
        """
        Создаёт host- и device-буферы для всех тензоров,
        а также список device-указателей в порядке engine.get_tensor_name(i).
        """
        self.h_buffers = {}
        self.d_buffers = {}
        self.bindings = []

        for name in self.tensor_names:
            # Get the shape as a tuple of ints
            shape = tuple(self.engine.get_tensor_shape(name))
            # Pick the numpy dtype
            mode = self.engine.get_tensor_mode(name)
            if mode == trt.TensorIOMode.INPUT:
                dtype_np = np.int32
            else:
                dtype_np = np.float16 if self.dtype == torch.float16 else np.float32

            # Allocate the host buffer and a matching device allocation
            host_mem = np.zeros(shape, dtype=dtype_np)
            dev_mem = cuda.mem_alloc(host_mem.nbytes)

            self.h_buffers[name] = host_mem
            self.d_buffers[name] = dev_mem
            # store the raw device pointer for the bindings list
            self.bindings.append(int(dev_mem))
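
For context, a minimal usage sketch of the wrapper above (the engine path is hypothetical and no inference is run):

# Minimal usage sketch; assumes a real .trt engine exists at the hypothetical path below.
encoder = TrtTextEncoder(
    engine_path="clip_text_encoder.trt",  # hypothetical path
    model_type="clip",
    dtype=torch.float16,
    device=torch.device("cuda"),
)

# The I/O tensor names and host buffer shapes come straight from the engine,
# so they can be inspected without running inference:
print(encoder.tensor_names)
print({name: buf.shape for name, buf in encoder.h_buffers.items()})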

Environment

TensorRT Version: tensorrt-10.12.0.36-cp312-none-win_amd64.whl

NVIDIA GPU: RTX 4090

NVIDIA Driver Version: 576.80 Windows

CUDA Version: 12.8

CUDNN Version: 9.10.1.4_cuda12

Operating System: Windows 11 + VS Code

Python Version (if applicable): 3.12

PyTorch Version (if applicable): 2.7.0

Baremetal or Container (if so, version): Baremetal

Screenshots

[Screenshot of the Pylance errors in VS Code]
