
Commit a8b5ba9

Merge pull request #57 from CarloLucibello/cl/graph
drop LightGraphs for Graphs
2 parents 63745a5 + c77bacc commit a8b5ba9

20 files changed: +66 / -62 lines
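For downstream users this is essentially a package rename: Graphs.jl is the maintained successor of LightGraphs.jl with the same core API. A minimal sketch of the corresponding change in user code (mirroring the construction shown in docs/src/gnngraph.md below):

```julia
# Before this release:
#   using GraphNeuralNetworks, LightGraphs
# From v0.3.0 on:
using GraphNeuralNetworks, Graphs

lg = erdos_renyi(10, 30)   # random graph from Graphs.jl (formerly provided by LightGraphs.jl)
g  = GNNGraph(lg)          # wrap it as a GNNGraph, as in the docs below
```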

Project.toml

Lines changed: 3 additions & 3 deletions
@@ -1,17 +1,17 @@
 name = "GraphNeuralNetworks"
 uuid = "cffab07f-9bc2-4db1-8861-388f63bf7694"
 authors = ["Carlo Lucibello and contributors"]
-version = "0.2.3"
+version = "0.3.0"

 [deps]
 Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
 CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
 ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
 DataStructures = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
 Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
+Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
 KrylovKit = "0b1a1467-8014-51b9-945f-bf0ae24f4b77"
 LearnBase = "7f8f8fb0-2700-5f03-b4bd-41f8cfc144b6"
-LightGraphs = "093fc24a-ae57-5d10-9952-331d41423f4d"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
 MacroTools = "1914dd2f-81c6-5fcd-8719-6d5c9610ff09"
 NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
@@ -26,9 +26,9 @@ CUDA = "3.3"
 ChainRulesCore = "1"
 DataStructures = "0.18"
 Flux = "0.12.7"
+Graphs = "1.4"
 KrylovKit = "0.5"
 LearnBase = "0.4, 0.5"
-LightGraphs = "1.3"
 MacroTools = "0.5"
 NNlib = "0.7"
 NNlibCUDA = "0.1"
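User environments that depend on GraphNeuralNetworks need the same dependency swap. A sketch of the update using the standard Pkg API (assuming LightGraphs was a direct dependency of your project):

```julia
using Pkg

Pkg.rm("LightGraphs")              # drop the archived package, if it was a direct dependency
Pkg.add("Graphs")                  # Graphs.jl, compat "1.4" as declared above
Pkg.update("GraphNeuralNetworks")  # pick up v0.3.0, the release containing this change
```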

docs/Project.toml

Lines changed: 1 addition & 1 deletion
@@ -2,6 +2,6 @@
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
 GraphNeuralNetworks = "cffab07f-9bc2-4db1-8861-388f63bf7694"
-LightGraphs = "093fc24a-ae57-5d10-9952-331d41423f4d"
+Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
 NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
 SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"

docs/make.jl

Lines changed: 2 additions & 2 deletions
@@ -1,10 +1,10 @@
-using Flux, NNlib, GraphNeuralNetworks, LightGraphs, SparseArrays
+using Flux, NNlib, GraphNeuralNetworks, Graphs, SparseArrays
 using Documenter

 DocMeta.setdocmeta!(GraphNeuralNetworks, :DocTestSetup, :(using GraphNeuralNetworks); recursive=true)

 makedocs(;
-    modules=[GraphNeuralNetworks, NNlib, Flux, LightGraphs, SparseArrays],
+    modules=[GraphNeuralNetworks, NNlib, Flux, Graphs, SparseArrays],
     doctest=false, clean=true,
     sitename = "GraphNeuralNetworks.jl",
     pages = ["Home" => "index.md",

docs/src/api/gnngraph.md

Lines changed: 1 addition & 1 deletion
@@ -24,5 +24,5 @@ Private = false
 ```@docs
 Flux.batch
 SparseArrays.blockdiag
-LightGraphs.adjacency_matrix
+Graphs.adjacency_matrix
 ```
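A quick sketch of calling the renamed docstring reference above, `adjacency_matrix`, on a `GNNGraph` (the graph here is an arbitrary example):

```julia
using GraphNeuralNetworks, Graphs

g = GNNGraph(erdos_renyi(5, 6))
A = adjacency_matrix(g)          # num_nodes × num_nodes adjacency matrix of g
@assert size(A) == (g.num_nodes, g.num_nodes)
```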

docs/src/gnngraph.md

Lines changed: 5 additions & 5 deletions
@@ -9,10 +9,10 @@ operators, gpu movement, and storage of node/edge/graph related feature arrays.
 A GNNGraph can be created from several different data sources encoding the graph topology:

 ```julia
-using GraphNeuralNetworks, LightGraphs, SparseArrays
+using GraphNeuralNetworks, Graphs, SparseArrays


-# Construct GNNGraph from From LightGraphs's graph
+# Construct GNNGraph from a Graphs.jl graph
 lg = erdos_renyi(10, 30)
 g = GNNGraph(lg)

@@ -70,7 +70,7 @@ g.ndata.y, g.ndata.x

 # Attach an array with edge features.
 # Since `GNNGraph`s are directed, the number of edges
-# will be double that of the original LightGraphs' undirected graph.
+# will be double that of the original Graphs' undirected graph.
 g = GNNGraph(erdos_renyi(10, 30), edata = rand(Float32, 60))
 @assert g.num_edges == 60

@@ -134,10 +134,10 @@ g′ = remove_self_loops(g)

 ## JuliaGraphs ecosystem integration

-Since `GNNGraph <: LightGraphs.AbstractGraph`, we can use any functionality from LightGraphs.
+Since `GNNGraph <: Graphs.AbstractGraph`, we can use any functionality from Graphs.

 ```julia
-@assert LightGraphs.isdirected(g)
+@assert Graphs.isdirected(g)
 ```

 ## GPU movement
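Building on the integration described above, generic Graphs.jl queries work directly on a `GNNGraph`; a minimal sketch (using `is_directed`, `nv`, and `ne`, the standard Graphs.jl names for these queries):

```julia
using GraphNeuralNetworks, Graphs

g = GNNGraph(erdos_renyi(10, 30))

@assert Graphs.is_directed(g)        # GNNGraphs are treated as directed graphs
@assert Graphs.nv(g) == g.num_nodes  # vertex count via the AbstractGraph interface
@assert Graphs.ne(g) == g.num_edges  # edge count: two directed edges per undirected one
```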

docs/src/index.md

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ First, we create our dataset consisting in multiple random graphs and associated
 Then we batch the graphs together into a unique graph.

 ```julia
-julia> using GraphNeuralNetworks, LightGraphs, Flux, CUDA, Statistics
+julia> using GraphNeuralNetworks, Graphs, Flux, CUDA, Statistics

 julia> all_graphs = GNNGraph[];
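The batching step mentioned above ("we batch the graphs together into a unique graph") goes through `Flux.batch` on a vector of `GNNGraph`s, listed in the API page earlier; a small sketch with arbitrary sizes:

```julia
using GraphNeuralNetworks, Graphs, Flux

all_graphs = [GNNGraph(erdos_renyi(10, 30), ndata = rand(Float32, 3, 10)) for _ in 1:4]

gbatch = Flux.batch(all_graphs)   # one graph with 4 disconnected components
@assert gbatch.num_graphs == 4
@assert gbatch.num_nodes == 40
```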

docs/src/messagepassing.md

Lines changed: 1 addition & 1 deletion
@@ -49,7 +49,7 @@ The convolution reads
 We will also add a bias and an activation function.

 ```julia
-using Flux, LightGraphs, GraphNeuralNetworks
+using Flux, Graphs, GraphNeuralNetworks

 struct GCN{A<:AbstractMatrix, B, F} <: GNNLayer
     weight::A
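The hunk above shows only the first lines of the hand-written `GCN` layer; a hedged sketch of how such a struct is typically completed (the `bias` and `σ` fields, the constructor, and the `glorot_uniform` initialization are assumptions, not part of this diff):

```julia
using Flux, Graphs, GraphNeuralNetworks
using Flux: glorot_uniform

struct GCN{A<:AbstractMatrix, B, F} <: GNNLayer
    weight::A
    bias::B
    σ::F
end

# Hypothetical convenience constructor for the layer above.
GCN(din::Int, dout::Int, σ = identity) =
    GCN(glorot_uniform(dout, din), zeros(Float32, dout), σ)
```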

docs/src/models.md

Lines changed: 2 additions & 2 deletions
@@ -20,7 +20,7 @@ In the explicit modeling style, the model is created according to the following

 Here is an example of this construction:
 ```julia
-using Flux, LightGraphs, GraphNeuralNetworks
+using Flux, Graphs, GraphNeuralNetworks

 struct GNN # step 1
     conv1
@@ -71,7 +71,7 @@ to layers subtyping the [`GNNLayer`](@ref) abstract type.
 Using `GNNChain`, the previous example becomes

 ```julia
-using Flux, LightGraphs, GraphNeuralNetworks
+using Flux, Graphs, GraphNeuralNetworks

 din, d, dout = 3, 4, 2
 g = GNNGraph(random_regular_graph(10, 4))
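A minimal sketch of the implicit `GNNChain` style referenced above, continuing from the dimensions and graph shown in the hunk (the layer choices and node features are illustrative assumptions):

```julia
using Flux, Graphs, GraphNeuralNetworks

din, d, dout = 3, 4, 2
g = GNNGraph(random_regular_graph(10, 4), ndata = rand(Float32, din, 10))

model = GNNChain(GCNConv(din => d, relu),   # graph convolution with relu activation
                 GCNConv(d => dout))        # final projection to dout features per node

y = model(g, g.ndata.x)                     # dout × num_nodes node embeddings
@assert size(y) == (dout, g.num_nodes)
```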

examples/Project.toml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ DiffEqFlux = "aae7a2af-3d4f-5e19-a356-7da93b79d9d0"
 DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
 Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
 GraphNeuralNetworks = "cffab07f-9bc2-4db1-8861-388f63bf7694"
-LightGraphs = "093fc24a-ae57-5d10-9952-331d41423f4d"
+Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
 MLDatasets = "eb30cadb-4394-5ae3-aed4-317e484a6458"
 NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
 NNlibCUDA = "a00861dc-f156-4864-bf3c-e6376f28a68d"

examples/node_classification_cora.jl

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ CUDA.allowscalar(false)
 function eval_loss_accuracy(X, y, ids, model, g)
     ŷ = model(g, X)
     l = logitcrossentropy(ŷ[:,ids], y[:,ids])
-    acc = mean(onecold(ŷ[:,ids] |> cpu) .== onecold(y[:,ids] |> cpu))
+    acc = mean(onecold(ŷ[:,ids]) .== onecold(y[:,ids]))
     return (loss = round(l, digits=4), acc = round(acc*100, digits=2))
 end
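A small usage sketch for the evaluation helper above (the tiny random graph, features, and two-class labels are hypothetical placeholders, not the Cora data used by the example script):

```julia
using Flux, GraphNeuralNetworks, Graphs

g = GNNGraph(erdos_renyi(10, 30))
X = rand(Float32, 3, 10)                 # 3 input features per node
y = Flux.onehotbatch(rand(1:2, 10), 1:2) # one-hot labels for 2 hypothetical classes
model = GNNChain(GCNConv(3 => 2))        # toy model mapping features to class scores

eval_loss_accuracy(X, y, 1:5, model, g)  # (loss, acc) on the first 5 nodes
```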
