Description
After reproducing and looking into #73, I found that the issue arises from a BoundReLU module with output shape [..., 50, 40] being bound-backwarded in bound_backward(.) with patches last_lA and last_uA whose parameters are stride=2, padding=3, height=25, width=20, kernel_size=[9,9]. An input shaped [..., 50, 40], passed through this stride=2, padding=3, kernel_size=[9,9] "convolution" (patch) operator, yields an output shaped [..., 24, 19] in the inplace_unfold function, which is inconsistent with the expected shape [25, 20]; hence the shape error. The problem is the incorrect parameters of lA on the 13:23:45 line of the following log (a quick unfold check after the log reproduces this). Actually, I think the "kernel size" of lA on the 13:23:45 line should be 7 instead of 9.
```
DEBUG 13:23:20 Bound backward from BoundBatchNormalization(/input.16) to bound BoundBatchNormalization(/input.16)
DEBUG 13:23:20 C: shape [64, 4, 25, 20, 64, 1, 1], type <class 'auto_LiRPA.patches.Patches'>
DEBUG 13:23:21 Bound backward to BoundBatchNormalization(name=/input.16, inputs=[/input.12, /8, /9, /10, /11], perturbed=True) (out shape torch.Size([4, 64, 25, 20]))
DEBUG 13:23:21 lA type <class 'auto_LiRPA.patches.Patches'> shape [64, 4, 25, 20, 64, 1, 1]
DEBUG 13:23:21 uA type <class 'auto_LiRPA.patches.Patches'> shape [64, 4, 25, 20, 64, 1, 1]
DEBUG 13:23:21 lA padding 0, stride 1, inserted_zeros 0
DEBUG 13:23:23 Bound backward to BoundConv(name=/input.12, inputs=[/input.8, /7], perturbed=True) (out shape torch.Size([4, 64, 25, 20]))
DEBUG 13:23:23 lA type <class 'auto_LiRPA.patches.Patches'> shape [64, 4, 25, 20, 64, 1, 1]
DEBUG 13:23:23 uA type <class 'auto_LiRPA.patches.Patches'> shape [64, 4, 25, 20, 64, 1, 1]
DEBUG 13:23:23 lA padding 0, stride 1, inserted_zeros 0
DEBUG 13:23:27 Bound backward to BoundAveragePool(name=/input.8, inputs=[/139], perturbed=True) (out shape torch.Size([4, 64, 25, 20]))
DEBUG 13:23:27 lA type <class 'auto_LiRPA.patches.Patches'> shape [64, 4, 25, 20, 64, 3, 3]
DEBUG 13:23:27 uA type <class 'auto_LiRPA.patches.Patches'> shape [64, 4, 25, 20, 64, 3, 3]
DEBUG 13:23:27 lA padding 1, stride 1, inserted_zeros 0
DEBUG 13:23:45 Bound backward to BoundRelu(name=/139, inputs=[/input.4], perturbed=True) (out shape torch.Size([4, 64, 50, 40]))
DEBUG 13:23:45 lA type <class 'auto_LiRPA.patches.Patches'> shape [64, 4, 25, 20, 64, 9, 9]
DEBUG 13:23:45 uA type <class 'auto_LiRPA.patches.Patches'> shape [64, 4, 25, 20, 64, 9, 9]
DEBUG 13:23:45 lA padding 3, stride 2, inserted_zeros 0
```
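As a quick sanity check (my own sketch, not library code): by the standard formula $\lfloor (H + 2p - k)/s \rfloor + 1$, a kernel of 9 with stride 2 and padding 3 on a [50, 40] input gives $\lfloor(50+6-9)/2\rfloor+1 = 24$ rows and $\lfloor(40+6-9)/2\rfloor+1 = 19$ columns, while a kernel of 7 gives the expected 25 and 20:

```python
import torch
import torch.nn.functional as F

# My own sketch: count the sliding positions the patch parameters produce
# on a [50, 40] input, using a plain unfold.
x = torch.randn(1, 64, 50, 40)

cols9 = F.unfold(x, kernel_size=9, stride=2, padding=3)
print(cols9.shape[-1])  # 456 = 24 * 19, not the expected 25 * 20 = 500

cols7 = F.unfold(x, kernel_size=7, stride=2, padding=3)
print(cols7.shape[-1])  # 500 = 25 * 20, matching the [25, 20] patch shape
```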
In this log, the parameters are BoundConv = [stride=2, padding=1, kernel_size=3] and BoundAveragePool = [stride=1, padding=1, kernel_size=3]. If you fuse two convolution-like operators, you should get a new convolution operator with parameters [stride=2, padding=3, kernel_size=7], not kernel_size=9. According to my calculations, fusing an inner operator [stride=$s_1$, padding=$p_1$, kernel_size=$k_1$] with an outer operator [stride=$s_2$, padding=$p_2$, kernel_size=$k_2$] should give [stride=$s_1 s_2$, padding=$p_1 + p_2 s_1$, kernel_size=$k_1 + (k_2 - 1) s_1$], i.e. [stride=$2$, padding=$1 + 1\cdot 2 = 3$, kernel_size=$3 + (3-1)\cdot 2 = 7$] here, but right now the effect of the code is like conv_transpose2d.
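A minimal single-channel PyTorch sketch of this fusion (my own check, not library code): compose the conv and the average pool explicitly, then compare against one convolution with the fused parameters. The average pool is written as a convolution with a constant kernel, and the fused 7x7 kernel is obtained as the stride-2 transposed convolution of the pool kernel with the conv kernel.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Fuse conv[stride=2, padding=1, k=3] followed by avgpool[stride=1, padding=1, k=3].
x = torch.randn(1, 1, 50, 40)
w_conv = torch.randn(1, 1, 3, 3)
w_pool = torch.full((1, 1, 3, 3), 1.0 / 9)        # 3x3 average pool as a convolution

z = F.conv2d(x, w_conv, stride=2, padding=1)      # [1, 1, 25, 20]
y_ref = F.conv2d(z, w_pool, stride=1, padding=1)  # [1, 1, 25, 20]

# Fused parameters: stride = 2*1 = 2, padding = 1 + 1*2 = 3,
# kernel_size = 3 + (3-1)*2 = 7.
w_fused = F.conv_transpose2d(w_pool, w_conv, stride=2)  # [1, 1, 7, 7]
y_fused = F.conv2d(x, w_fused, stride=2, padding=3)     # [1, 1, 25, 20]

# The last row/column can differ: the stride-2 conv crops one leftover input
# row/column (49 and 39 are odd), which the fused operator still reaches, so
# compare the region where the composition is exact.
print(torch.allclose(y_ref[..., :-1, :-1], y_fused[..., :-1, :-1], atol=1e-5))
# -> True
```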
(I hope someone can discuss this with me.)
(I omit inserted_zeros because it appears to always be zero, based on skimming the code and running the test.)
It is weird, because such a bug shouldn't exist given that auto_LiRPA has gone through many tests, so please educate me if I am missing something important.
Just to confirm: if I understand the design of the `Patches` class correctly, a `Patches` object with shape `[..., c, h, w, K1, K2]` and parameters `padding`, `stride`, `inserted_zeros` (=0) represents a linear relation over the input entries for each entry in the $c\times h\times w$ space, in which each output entry is computed as the inner product of the `K1 x K2` kernel and a square subset of the input. The position of that square within the input is determined by `padding`, `stride`, and the position of the entry, just like a convolution operator.
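For concreteness, here is a minimal sketch of that understanding (my own illustration with simplified shapes and a single input channel; not the library's exact API):

```python
import torch
import torch.nn.functional as F

# Simplified Patches tensor of shape [c, h, w, K1, K2] with stride=2, padding=3.
c, h, w, K = 4, 25, 20, 7
in_h, in_w = 50, 40
A = torch.randn(c, h, w, K, K)
x = torch.randn(1, 1, in_h, in_w)

# The square input window behind each (h, w) position, placed exactly where a
# convolution with the same stride/padding would place it.
cols = F.unfold(x, kernel_size=K, stride=2, padding=3)   # [1, K*K, h*w]
cols = cols.view(K, K, h, w)

# Each output entry (i, p, q) = inner product of its own K x K kernel with
# the input window at position (p, q).
y = torch.einsum('ipqkl,klpq->ipq', A, cols)             # [c, h, w]
print(y.shape)  # torch.Size([4, 25, 20])
```

Under this reading, kernel_size=7 is exactly the value for which the number of unfold positions matches the [25, 20] patch grid, which is why I believe the 13:23:45 line is wrong.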