
Google Colab notebook is not working  #31

Description

@charlescho64

@madhawav
What was the result of your check of the Google Colab notebook?

I tried the following configurations in the Google Colab notebook, all without success.

  1. python==3.7.13 torch==1.7.1+cu110 torchvision==0.8.2+cu110
    --> Failed building wheel for torch-scatter and torch-sparse; torch-geometric built successfully.
    --> I can't proceed any further (see the install workaround sketched after item 2).

  2. python==3.6.9 torch==1.8.0+cu111 torchvision==0.9.0+cu111
    --> Successfully installed torch-scatter-2.0.9, torch-sparse-0.6.13, and torch-geometric-2.0.4.
    --> However, importing torch_sparse and torch_geometric fails.
    --> I can't proceed any further.
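
    For both 1 and 2, installing torch-scatter and torch-sparse from PyG's prebuilt wheel index should avoid compiling them from source at all. A sketch for configuration 2 (the index URL must match the installed torch/CUDA pair exactly, so treat the versions here as assumptions to adjust):

    ```
    # Colab cell: install prebuilt wheels instead of building from source.
    # The index URL below assumes torch==1.8.0+cu111 is already installed.
    !pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-1.8.0+cu111.html
    !pip install torch-geometric
    ```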

  3. python==3.7.13 torch==1.11.0+cu113 torchvision==0.12.0+cu113
    --> Successfully installed torch-scatter, torch-sparse, and torch-geometric.
    --> /plan2scene/code/src/plan2scene/texture_gen/nets/vgg.py fails with "No module named 'torchvision.models.utils'", since that module no longer exists in torchvision 0.12.0. I patched the import to go through torch.hub instead (sketched just below).
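
    The patch I applied was along these lines (a minimal sketch; the exact import line in vgg.py may differ):

    ```python
    # Before (fails on torchvision 0.12, where torchvision.models.utils was removed):
    # from torchvision.models.utils import load_state_dict_from_url

    # After: torch.hub ships the same helper under the same name.
    from torch.hub import load_state_dict_from_url
    ```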
    --> In the "Task: Upload rectified surface crops extracted from photos." step, the uploaded files land in a photo_file_name directory created under rectified_crops. After moving the files up into rectified_crops itself, the textures do appear in the "Task: Let's preview the data you have provided." step (a sketch of the move is just below).
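
    Flattening the upload into rectified_crops can be done with a cell like this (a sketch only; the rectified_crops path is an assumption, adjust it to wherever the notebook actually unpacks the upload):

    ```python
    import shutil
    from pathlib import Path

    crops_root = Path("rectified_crops")  # hypothetical location of the crops directory

    # Move every file out of the per-photo subdirectories into rectified_crops
    # itself, then drop the emptied subdirectories.
    for sub in [p for p in crops_root.iterdir() if p.is_dir()]:
        for f in list(sub.iterdir()):
            shutil.move(str(f), str(crops_root / f.name))
        sub.rmdir()
    ```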
    --> "# Compute texture embeddings for observed surfaces (Code adapted from ./code/scripts/preprocessing/fill_room_embeddigs.py)" step have error below like.
    --> I can't proceed any further.

```
/usr/local/lib/python3.7/dist-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2228.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-31-42fcee0d7001> in <module>()
      4       for candidate_key, image_description in room.surface_textures[surface].items():
      5         image = image_description.image
----> 6         emb, loss = tg_predictor.predict_embs([image])
      7         room.surface_embeddings[surface][candidate_key] = emb
      8         room.surface_losses[surface][candidate_key] = loss

10 frames
/content/plan2scene/code/src/plan2scene/texture_gen/predictor.py in predict_embs(self, sample_image_crops)
     81             predictor_result = self.predict(unsigned_images.to(self.conf.device),
     82                                             unsigned_hsv_images.to(self.conf.device),
---> 83                                             self.get_position(), combined_emb=None, train=False)
     84 
     85             # Compute loss between synthesized texture and conditioned image

/content/plan2scene/code/src/plan2scene/texture_gen/predictor.py in predict(self, unsigned_images, unsigned_hsv_images, sample_pos, train, combined_emb)
    272             network_input, base_color = self._compute_network_input(unsigned_images, unsigned_hsv_images, additional_params)
    273             network_out, network_emb, substance_out = self.net(network_input, sample_pos.to(self.conf.device),
--> 274                                                                self.seed)
    275         else:
    276             # Predict using the combined_emb. Skip encoder.

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

/content/plan2scene/code/src/plan2scene/texture_gen/nets/neural_texture/texture_gen.py in forward(self, image_gt, position, seed, weights_bottleneck)
     87 
     88         input_mlp = torch.cat([z_encoding, noise], dim=1)
---> 89         image_out = self.decoder(input_mlp)
     90         image_out = torch.tanh(image_out)
     91 

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

/content/plan2scene/code/src/plan2scene/texture_gen/nets/neural_texture/mlp.py in forward(self, input)
     32     def forward(self, input):
     33 
---> 34         input_z = self.first_conv(input)
     35         output = input_z
     36         for idx, block in enumerate(self.res_blocks):

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

/content/plan2scene/code/src/plan2scene/texture_gen/nets/core_modules/standard_block.py in forward(self, input, style)
     67             output = self.norm(output, style)
     68         else:
---> 69             output = self.layer(input)
     70 
     71             # output = self.norm(output)

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    445 
    446     def forward(self, input: Tensor) -> Tensor:
--> 447         return self._conv_forward(input, self.weight, self.bias)
    448 
    449 class Conv3d(_ConvNd):

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    442                             _pair(0), self.dilation, self.groups)
    443         return F.conv2d(input, weight, bias, self.stride,
--> 444                         self.padding, self.dilation, self.groups)
    445 
    446     def forward(self, input: Tensor) -> Tensor:

TypeError: conv2d() received an invalid combination of arguments - got (Tensor, Parameter, Parameter, tuple, tuple, tuple, int), but expected one of:
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (Tensor, !Parameter!, !Parameter!, !tuple!, !tuple!, !tuple!, int)
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (Tensor, !Parameter!, !Parameter!, !tuple!, !tuple!, !tuple!, int)
```
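
My guess (unverified) is that the arguments themselves are fine and the !Parameter!/!tuple! markers point at a binary mismatch: if torch gets reinstalled or shadowed after torch-scatter/torch-sparse pull in a different build, the model's parameters can come from a different torch than the conv2d being called. A quick sanity check after restarting the runtime (a sketch, nothing plan2scene-specific):

```python
import torch
import torch_scatter
import torch_sparse

# These should all agree on a single torch build; a mismatch here could
# explain conv2d() rejecting otherwise-valid Parameter/tuple arguments.
print(torch.__version__, torch.version.cuda)
print(torch_scatter.__version__, torch_sparse.__version__)
print(type(torch.nn.Parameter(torch.zeros(1))))  # expect <class 'torch.nn.parameter.Parameter'>
```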

Could you recheck the Colab notebook?

Originally posted by @charlescho64 in #28 (comment)
