1.6.2
Features
Minor improvements
The SIGN example now operates on mini-batches of nodes
Improved data loading runtime of InMemoryDatasets
NeighborSampler now works with SparseTensor as input
Added the ToUndirected transform to convert directed graphs into undirected ones
GNNExplainer now allows for customizable edge and node feature loss reduction
aggr can now be passed to any GNN based on the MessagePassing interface (thanks to @m30m)
Runtime improvements in SEAL (thanks to @muhanzhang)
Runtime improvements in torch_geometric.utils.softmax (thanks to @Book1996)
GAE.recon_loss now supports custom negative edge indices (thanks to @reshinthadithyan)
Faster spmm computation and random_walk sampling on CPU (torch-sparse and torch-cluster updates required)
DataParallel now supports the follow_batch argument
Parallel approximate PPR computation in the GDC transform (thanks to @klicperajo)
Improved documentation by providing an autosummary of all subpackages (thanks to @m30m)
Improved documentation on how edge weights are handled in various GNNs (thanks to @m30m)
Bugfixes
Fixed a bug in GATConv when computing attention coefficients in bipartite graphs
Fixed a bug in GraphSAINTSampler that led to wrong edge feature sampling
Fixed the DimeNet pretraining link
Fixed a bug in processing ego-twitter and ego-gplus of the SNAPDataset collection
Fixed a number of broken dataset URLs (ICEWS18, QM9, QM7b, MoleculeNet, Entities, PPI, Reddit, MNISTSuperpixels, ShapeNet)
Fixed a bug in which MessagePassing.jittable() tried to write to a file without permission (thanks to @twoertwein)
GCNConv no longer requires edge_weight when normalize=False
Batch.num_graphs now reports the correct number of graphs in the presence of zero-sized graphs