Commit 11465f3

andrewrosemberg (Andrew David Werner Rosemberg) authored
Flux constraints JuMP (#14)
* overloads * update * update * update * update * update * update * update * update * update * running code * code working * update working * update code * rm unwanted code * working code finally * relax solutions working * update best set-up * update code * update code * fix bug * update code * update to save data * add to git ignore * update * update * update * update * try fix * update code * add arrow compresser * update * update * update * fix * fix * working code * update * update * update * fix error * add comments * fix typo * update visuals * update * update code * fix * update code train * update * update * update * faster train float32 * update * update * update generation * update * update gitignore * update objective * fix update * update lambda * update fix * update * update * fix tests * update penalty * update with model save * update save train * update code * update train * update * update * update * update code * update vis

---------

Co-authored-by: Andrew David Werner Rosemberg <[email protected]>
Co-authored-by: Andrew David Werner Rosemberg <[email protected]>
Co-authored-by: Andrew David Werner Rosemberg <[email protected]>
Co-authored-by: Andrew David Werner Rosemberg <[email protected]>
1 parent 0bd7469 commit 11465f3

28 files changed: +1412 -40 lines changed

.gitignore

Lines changed: 10 additions & 0 deletions
@@ -8,3 +8,13 @@ Manifest.toml
 *.arrow
 *.m
 *.csv
+.DS_Store
+*.out
+*.sbatch
+examples/unitcommitment/app/*
+*.edu
+examples/unitcommitment/wandb/*
+*.png
+*.jls
+*.jlso
+*.jld2

LocalPreferences.toml

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+[CUDA_Runtime_jll]
+__clear__ = ["local"]
+version = "12.1"

Project.toml

Lines changed: 7 additions & 2 deletions
@@ -11,9 +11,12 @@ Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
 JuMP = "4076af6c-e467-56ae-b986-b466b2749572"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
 MLJFlux = "094fc8d1-fd35-5302-93ea-dabda2abf845"
+NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
 Nonconvex = "01bcebdf-4d21-426d-b5c4-6132c1619978"
+Optimisers = "3bd65402-5787-11e9-1adc-39752487f4e2"
 ParametricOptInterface = "0ce4ce61-57bf-432b-a095-efac525d185e"
 Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
+Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
 UUIDs = "cf7118a7-6976-5b1a-9a39-7adc72f591a4"
 Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"
@@ -22,21 +25,23 @@ Arrow = "2"
 CSV = "0.10"
 Dualization = "0.5"
 JuMP = "1"
-ParametricOptInterface = "0.5"
+ParametricOptInterface = "0.7"
+Zygote = "^0.6.68"
 julia = "1.6"

 [extras]
 AbstractGPs = "99985d1d-32ba-4be9-9821-2ec096f28918"
+CUDA_Runtime_jll = "76a88914-d11a-5bdc-97e0-2f5a05c973a2"
 Clarabel = "61c947e1-3e6d-4ee4-985a-eec8c727bd6e"
 DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 DelimitedFiles = "8bb1440f-4735-579b-a4ab-409b98df4dab"
 HiGHS = "87dc4568-4c63-4d18-b0c0-bb2238e4078b"
 Ipopt = "b6b21f68-93f8-5de0-b562-5493be1d77c9"
+MLJ = "add582a8-e3ab-11e8-2d5e-e98b27df1bc7"
 NonconvexNLopt = "b43a31b8-ff9b-442d-8e31-c163daa8ab75"
 PGLib = "07a8691f-3d11-4330-951b-3c50f98338be"
 PowerModels = "c36e90e8-916a-50a6-bd94-075b64ef4655"
 Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
-MLJ = "add582a8-e3ab-11e8-2d5e-e98b27df1bc7"

 [targets]
 test = ["Test", "DelimitedFiles", "PGLib", "HiGHS", "PowerModels", "DataFrames", "Clarabel", "Ipopt", "NonconvexNLopt", "MLJ"]

README.md

Lines changed: 3 additions & 3 deletions
@@ -28,7 +28,7 @@ The user needs to first define a problem iterator:
 # The problem to iterate over
 model = Model(() -> POI.Optimizer(HiGHS.Optimizer()))
 @variable(model, x)
-p = @variable(model, p in POI.Parameter(1.0)) # The parameter (defined using POI)
+p = @variable(model, p in MOI.Parameter(1.0)) # The parameter (defined using POI)
 @constraint(model, cons, x + p >= 3)
 @objective(model, Min, 2x)

@@ -42,7 +42,7 @@ problem_iterator = ProblemIterator(parameter_values)
 The parameter values of the problem iterator can be saved by simply:

 ```julia
-save(problem_iterator, "input_file.csv", CSVFile)
+save(problem_iterator, "input_file", CSVFile)
 ```

 Which creates the following CSV:
@@ -71,7 +71,7 @@ Then chose what values to record:
 recorder = Recorder{CSVFile}("output_file.csv", primal_variables=[x], dual_variables=[cons])

 # Finally solve all problems described by the iterator
-solve_batch(model, problem_iterator, recorder)
+solve_batch(problem_iterator, recorder)
 ```

 Which creates the following CSV:
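
Taken together, the README hunks track an API shift: parameters are declared with MOI.Parameter, save takes a base file name without an extension, and solve_batch no longer takes the model. A minimal end-to-end sketch of the updated snippet follows; the package imports and the exact parameter_values construction are assumptions, since they are not shown in this diff:

```julia
using JuMP, HiGHS, L2O                          # assumed imports; not shown in this diff
import ParametricOptInterface as POI

# Parametric problem, as in the updated README
model = Model(() -> POI.Optimizer(HiGHS.Optimizer()))
@variable(model, x)
p = @variable(model, p in MOI.Parameter(1.0))
@constraint(model, cons, x + p >= 3)
@objective(model, Min, 2x)

# Assumed shape of the iterator input: one vector of values per parameter
parameter_values = Dict(p => collect(1.0:10.0))
problem_iterator = ProblemIterator(parameter_values)
save(problem_iterator, "input_file", CSVFile)   # base name only after this commit

# Record x and the dual of `cons`, then solve every instance in the batch
recorder = Recorder{CSVFile}("output_file.csv", primal_variables=[x], dual_variables=[cons])
solve_batch(problem_iterator, recorder)         # model argument dropped after this commit
```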
Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
+using PyCall
+using Conda
+# Conda.add("huggingface_hub")
+
+huggingface_hub = pyimport("huggingface_hub")
+
+huggingface_hub.login(token=ENV["HUGGINGFACE_TOKEN"])
+
+function download_dataset(organization, dataset, case_name, io_type; formulation="", cache_dir="./data/")
+    dataset_url = joinpath(organization, dataset)
+    if io_type ∉ ["input", "output"]
+        throw(ArgumentError("io_type must be 'input' or 'output'."))
+    end
+
+    if io_type == "input"
+        data_path = joinpath(case_name, "input")
+    else
+        if formulation == ""
+            throw(ArgumentError("Formulation must be specified for 'output' data."))
+        end
+        data_path = joinpath(case_name, "output", formulation)
+    end
+
+    # Fetch the dataset from the provided URL
+    huggingface_hub.snapshot_download(dataset_url, allow_patterns=["$data_path/*.arrow"], local_dir=cache_dir, repo_type="dataset", local_dir_use_symlinks=false)
+
+    return nothing
+end
+
+cache_dir = "./examples/powermodels/data/"
+organization = "L2O"
+dataset = "pglib_opf_solves"
+case_name = "pglib_opf_case300_ieee"
+formulation = "DCPPowerModel"
+io_type = "input"
+download_dataset(organization, dataset, case_name, io_type; cache_dir=cache_dir)
+
+io_type = "output"
+download_dataset(organization, dataset, case_name, io_type; formulation=formulation, cache_dir=cache_dir)
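
A brief usage note on this new script: the one-time Conda install is left commented at the top, and the Hugging Face token is read from the environment before login. A hypothetical invocation is sketched below; the script's path in the repository was not captured in this diff, so the file name is an assumption:

```julia
# using Conda; Conda.add("huggingface_hub")              # one-time install, as noted in the script
ENV["HUGGINGFACE_TOKEN"] = "hf_..."                       # or export it in the shell before starting Julia
include("examples/powermodels/download_hf_dataset.jl")   # hypothetical file name
```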

examples/flux/flux_forecaster_script.jl

Lines changed: 12 additions & 6 deletions
@@ -3,26 +3,32 @@ TestEnv.activate()

 using Arrow
 using CSV
+using MLJFlux
 using Flux
+using MLJ
 using DataFrames
 using PowerModels
 using L2O

 # Paths
 case_name = "pglib_opf_case300_ieee" # pglib_opf_case300_ieee # pglib_opf_case5_pjm
-network_formulation = SOCWRConicPowerModel # SOCWRConicPowerModel # DCPPowerModel
+network_formulation = DCPPowerModel # SOCWRConicPowerModel # DCPPowerModel
 filetype = ArrowFile # ArrowFile # CSVFile
 path_dataset = joinpath(pwd(), "examples", "powermodels", "data")
-case_file_path = joinpath(path_dataset, case_name, string(network_formulation))
+case_file_path = joinpath(path_dataset, case_name)
+case_file_path_output = joinpath(case_file_path, "output", string(network_formulation))
+case_file_path_input = joinpath(case_file_path, "input")

 # Load input and output data tables
-iter_files = readdir(joinpath(case_file_path))
-iter_files = filter(x -> occursin(string(filetype), x), iter_files)
+iter_files_in = readdir(joinpath(case_file_path_input))
+iter_files_in = filter(x -> occursin(string(filetype), x), iter_files_in)
 file_ins = [
-    joinpath(case_file_path, file) for file in iter_files if occursin("input", file)
+    joinpath(case_file_path_input, file) for file in iter_files_in if occursin("input", file)
 ]
+iter_files_out = readdir(joinpath(case_file_path_output))
+iter_files_out = filter(x -> occursin(string(filetype), x), iter_files_out)
 file_outs = [
-    joinpath(case_file_path, file) for file in iter_files if occursin("output", file)
+    joinpath(case_file_path_output, file) for file in iter_files_out if occursin("output", file)
 ]
 batch_ids = [split(split(file, "_")[end], ".")[1] for file in file_ins]


examples/powermodels/pglib_datagen.jl

Lines changed: 6 additions & 5 deletions
@@ -127,6 +127,7 @@ function pm_primal_builder!(
             set_dual_variable!(recorder, real_balance)
         end
     end
+    set_model!(recorder)
     return model, parameters, variable_refs
 end

@@ -185,13 +186,13 @@ function generate_dataset_pglib(
         [network_data["load"]["$l"]["pd"] for l in 1:num_loads],
         [network_data["load"]["$l"]["qd"] for l in 1:num_loads],
     )
-    p = load_parameter_factory(model, 1:num_inputs; load_set=POI.Parameter.(original_load))
+    p = load_parameter_factory(model, 1:num_inputs; load_set=MOI.Parameter.(original_load))

     # Build model and Recorder
     file = joinpath(
         data_sim_dir, case_name * "_" * string(network_formulation) * "_output_" * batch_id
     )
-    recorder = Recorder{filetype}(file; filterfn=filterfn)
+    recorder = Recorder{filetype}(file; filterfn=filterfn, model=model)
     pm_primal_builder!(
         model, p, network_data, network_formulation; recorder=recorder, record_duals=true
     )

@@ -250,7 +251,7 @@ function generate_worst_case_dataset_Nonconvex(
         [l["qd"] for l in values(network_data["load"])],
     )
     p = load_parameter_factory(
-        model, 1:(num_loads * 2); load_set=POI.Parameter.(original_load)
+        model, 1:(num_loads * 2); load_set=MOI.Parameter.(original_load)
     )

     # Define batch id

@@ -265,7 +266,7 @@ function generate_worst_case_dataset_Nonconvex(
         data_sim_dir, case_name * "_" * string(network_formulation) * "_output_" * batch_id
     )
     recorder = Recorder{filetype}(
-        file_output; filename_input=file_input, primal_variables=[], dual_variables=[]
+        file_output; filename_input=file_input, primal_variables=[], dual_variables=[], model=model
     )

     # Build model

@@ -372,7 +373,7 @@ function generate_worst_case_dataset(
         data_sim_dir, case_name * "_" * string(network_formulation) * "_output_" * batch_id
     )
     recorder = Recorder{filetype}(
-        file_output; filename_input=file_input, primal_variables=[], dual_variables=[]
+        file_output; filename_input=file_input, primal_variables=[], dual_variables=[], model=JuMP.Model() # dummy model
     )

     # Solve all problems and record solutions
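
The common thread in these hunks is that Recorder now receives the JuMP model explicitly, and pm_primal_builder! registers it via set_model!. A minimal sketch of the updated constructor call, using only names that appear in this diff (the surrounding variables are assumed to be in scope):

```julia
# Sketch: Recorder now carries the model through the `model` keyword.
recorder = Recorder{filetype}(
    file_output; filename_input=file_input,
    primal_variables=[], dual_variables=[],
    model=model,   # or model=JuMP.Model() as a dummy where no real model is needed
)
```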
Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
+[deps]
+Arrow = "69666777-d1a9-59fb-9406-91d4454c9d45"
+CSV = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b"
+DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
+Gurobi = "2e9cd046-0924-5485-92f1-d5272153d98b"
+JuMP = "4076af6c-e467-56ae-b986-b466b2749572"
+L2O = "e1d8bfa7-c465-446a-84b9-451470f6e76c"
+LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
+Logging = "56ddb016-857b-54e1-b83d-db4d58db5568"
+ParametricOptInterface = "0ce4ce61-57bf-432b-a095-efac525d185e"
+SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
+Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
+UUIDs = "cf7118a7-6976-5b1a-9a39-7adc72f591a4"
+UnitCommitment = "64606440-39ea-11e9-0f29-3303a1d3d877"
