
Commit abc217a

Adjust doc, and fix tutorial

1 parent 0168942 commit abc217a

2 files changed: +14 −14 lines

docs/src/benchmark_interfaces.md (4 additions, 4 deletions)

@@ -11,10 +11,10 @@ All benchmarks work with [`DataSample`](@ref) objects that encapsulate the data
 
 ```julia
 @kwdef struct DataSample{I,F,S,C}
-    x::F = nothing # Input features
-    θ_true::C = nothing # True cost/utility parameters
-    y_true::S = nothing # True optimal solution
-    instance::I = nothing # Problem instance object/additional data
+    x::F = nothing # Input features of the policy
+    θ::C = nothing # Intermediate cost/utility parameters
+    y::S = nothing # Output solution
+    info::I = nothing # Additional data information (e.g., problem instance)
 end
 ```
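The renamed struct can be exercised on its own; a minimal sketch using only `Base.@kwdef`, with hypothetical placeholder values standing in for real benchmark data:

```julia
# Struct definition mirrors the new version from the diff above;
# the sample values are hypothetical placeholders, not package data.
Base.@kwdef struct DataSample{I,F,S,C}
    x::F = nothing     # Input features of the policy
    θ::C = nothing     # Intermediate cost/utility parameters
    y::S = nothing     # Output solution
    info::I = nothing  # Additional data information (e.g., problem instance)
end

# Keyword construction lets any subset of fields be omitted:
sample = DataSample(; x=rand(3, 12, 12), θ=-rand(12, 12), y=falses(12, 12))
isnothing(sample.info)  # info defaults to nothing, as in the Warcraft tutorial
```

Because every field defaults to `nothing`, downstream code can probe which fields a given benchmark actually populates.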

docs/src/tutorials/warcraft_tutorial.jl (10 additions, 10 deletions)

@@ -21,16 +21,16 @@ dataset = generate_dataset(b, 50);
 # Subdatasets can be created through regular slicing:
 train_dataset, test_dataset = dataset[1:45], dataset[46:50]
 
-# And getting an individual sample will return a [`DataSample`](@ref) with four fields: `x`, `instance`, `θ`, and `y`:
+# And getting an individual sample will return a [`DataSample`](@ref) with four fields: `x`, `info`, `θ`, and `y`:
 sample = test_dataset[1]
 # `x` corresponds to the input features, i.e. the input image (3D array) in the Warcraft benchmark case:
 x = sample.x
-# `θ_true` corresponds to the true unknown terrain weights. We use the opposite of the true weights in order to formulate the optimization problem as a maximization problem:
-θ_true = sample.θ_true
-# `y_true` corresponds to the optimal shortest path, encoded as a binary matrix:
-y_true = sample.y_true
-# `instance` is not used in this benchmark, therefore set to nothing:
-isnothing(sample.instance)
+# `θ` corresponds to the true unknown terrain weights. We use the opposite of the true weights in order to formulate the optimization problem as a maximization problem:
+θ_true = sample.θ
+# `y` corresponds to the optimal shortest path, encoded as a binary matrix:
+y_true = sample.y
+# `info` is not used in this benchmark, and is therefore set to nothing:
+isnothing(sample.info)
 
 # For some benchmarks, we provide the following plotting method [`plot_data`](@ref) to visualize the data:
 plot_data(b, sample)
@@ -50,7 +50,7 @@ maximizer = generate_maximizer(b; dijkstra=true)
 # In the case of the Warcraft benchmark, the method has an additional keyword argument to choose the algorithm to use: Dijkstra's algorithm or the Bellman-Ford algorithm.
 y = maximizer(θ)
 # As we can see, the pipeline currently predicts random noise as cell weights, and therefore the maximizer returns a straight-line path.
-plot_data(b, DataSample(; x, θ_true=θ, y_true=y))
+plot_data(b, DataSample(; x, θ, y))
 # We can evaluate the current pipeline performance using the optimality gap metric:
 starting_gap = compute_gap(b, test_dataset, model, maximizer)

@@ -70,7 +70,7 @@ opt_state = Flux.setup(Adam(1e-3), model)
 loss_history = Float64[]
 for epoch in 1:50
     val, grads = Flux.withgradient(model) do m
-        sum(loss(m(x), y_true) for (; x, y_true) in train_dataset) / length(train_dataset)
+        sum(loss(m(x), y) for (; x, y) in train_dataset) / length(train_dataset)
     end
     Flux.update!(opt_state, model, grads[1])
     push!(loss_history, val)
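The updated loop body relies on Julia's property destructuring, `(; x, y)`, which pulls fields by name from any struct, so the iteration keeps working after the field rename. A self-contained sketch, with a locally defined stand-in struct, dataset, model, and loss (none of these come from the benchmark package):

```julia
# Property destructuring `(; x, y)` extracts fields by name (Julia ≥ 1.7),
# so renaming `y_true` to `y` only changes the names bound in the loop body.
Base.@kwdef struct DataSample{I,F,S,C}
    x::F = nothing
    θ::C = nothing
    y::S = nothing
    info::I = nothing
end

train_dataset = [DataSample(; x=Float64(i), y=2.0i) for i in 1:5]
loss(ŷ, y) = abs2(ŷ - y)  # stand-in squared-error loss
model(x) = 2x             # stand-in model that happens to be exact

avg = sum(loss(model(x), y) for (; x, y) in train_dataset) / length(train_dataset)
```

Here `avg` is `0.0` since the stand-in model reproduces each `y` exactly; with a real Flux model the same expression yields the mean training loss.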
@@ -85,7 +85,7 @@ final_gap = compute_gap(b, test_dataset, model, maximizer)
 #
 θ = model(x)
 y = maximizer(θ)
-plot_data(b, DataSample(; x, θ_true=θ, y_true=y))
+plot_data(b, DataSample(; x, θ, y))
 
 using Test #src
 @test final_gap < starting_gap #src
