I have implemented a three-target variant of https://arxiv.org/pdf/1902.07987.pdf in PyTorch Geometric (https://github.com/tferber/ECLML/blob/main/models/models.py) using the GravNetConv layer (https://pytorch-geometric.readthedocs.io/en/latest/_modules/torch_geometric/nn/conv/gravnet_conv.html). It should simultaneously detect target classes and background fluctuations.

With any reasonable learning rate and model parameters the model detects class 3 (gray), but assigns more or less 50/50 weights to classes 1 and 2 (blue and red): https://desycloud.desy.de/index.php/s/9cGfdECbaP2dx7g

With a single event I can get the model to overtrain (https://desycloud.desy.de/index.php/s/w49B2mWbDEyRsRM), which makes me think that it works somewhat, but as soon as I use a few events in a batch, the performance goes down significantly. Maybe I have a problem with the sparse block-diagonal adjacency matrices? I wonder if there is any working example/test of the PyTorch GravNetConv layer?
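For context, here is a minimal, hypothetical sketch of the kind of per-hit three-class GravNetConv setup I mean. The layer sizes, the number of layers, the k value, and the softmax head are illustrative assumptions, not taken from the linked repository; the point is only that the batch vector is forwarded to every GravNetConv call so the learned kNN neighbourhoods stay within one event:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GravNetConv

class ToyGravNet(torch.nn.Module):
    """Illustrative per-hit classifier; hyperparameters are placeholders."""
    def __init__(self, in_channels=16, hidden=64, num_classes=3):
        super().__init__()
        self.conv1 = GravNetConv(in_channels, hidden,
                                 space_dimensions=4, propagate_dimensions=22, k=8)
        self.conv2 = GravNetConv(hidden, hidden,
                                 space_dimensions=4, propagate_dimensions=22, k=8)
        self.out = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, batch):
        # batch maps every node (hit) to its event, so the kNN graph built in
        # the learned coordinate space never connects hits from different events.
        x = F.elu(self.conv1(x, batch=batch))
        x = F.elu(self.conv2(x, batch=batch))
        return F.log_softmax(self.out(x), dim=-1)  # per-hit class log-probabilities
```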
Sadly, we do not have a GravNetConv example yet (pinging @jkiesele just in case he wants to provide one :)).

In general, block-diagonal adjacency matrices should be created automatically by torch_geometric.data.DataLoader. Can you show a small example? This might help to identify the issue.
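Not an official example, but a minimal sketch of what that batching looks like: the DataLoader collates individual Data objects into one big graph and provides a batch vector assigning each node to its event, and passing that vector to GravNetConv restricts the learned kNN graph to nodes of the same event, which is exactly the block-diagonal behaviour. The feature dimension and k below are arbitrary placeholders:

```python
import torch
from torch_geometric.data import Data, DataLoader  # newer releases expose this as torch_geometric.loader.DataLoader
from torch_geometric.nn import GravNetConv

# Two toy "events" with a different number of hits each, 16 input features per hit.
events = [Data(x=torch.randn(10, 16)), Data(x=torch.randn(7, 16))]
loader = DataLoader(events, batch_size=2)

conv = GravNetConv(in_channels=16, out_channels=32,
                   space_dimensions=4, propagate_dimensions=8, k=4)

for batch in loader:
    # batch.x has shape [17, 16]; batch.batch is [0]*10 + [1]*7,
    # i.e. the block-diagonal structure is encoded in the batch vector.
    out = conv(batch.x, batch=batch.batch)
    print(out.shape)  # torch.Size([17, 32])
```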