The P1B1 hyperopt workflow evaluates a modified version of the P1B1 benchmark autoencoder using hyperparameters provided by a hyperopt instance. The P1B1 code (p1b1_baseline.py) has been modified to expose a functional interface. The neural net remains the same. Currently, hyperopt minimizes the validation loss.
See https://github.com/ECP-CANDLE/Supervisor/tree/master/workflows/p1b1_hyperopt for more details.
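As an illustrative sketch only (the function name +run+, the hyperparameter names, and the returned dictionary key below are assumptions for this example, not the actual interface of +p1b1_baseline.py+), the functional-interface pattern that an optimizer such as hyperopt minimizes looks roughly like:

```python
# Hedged sketch: the real p1b1_baseline.py interface may differ.
# "run", the parameter names, and "val_loss" are hypothetical.

def run(params):
    """Stand-in for training the model with the given hyperparameters;
    returns the quantity the optimizer minimizes (validation loss)."""
    # Pretend a dropout near 0.1 and a batch size near 64 train best.
    val_loss = (params["dropout"] - 0.1) ** 2 + abs(params["batch_size"] - 64) / 640
    return {"val_loss": val_loss}

def objective(params):
    # The optimizer minimizes the value returned here.
    return run(params)["val_loss"]

# A trivial grid search stands in for hyperopt in this sketch.
best = min(
    ({"dropout": d, "batch_size": b} for d in (0.0, 0.1, 0.2) for b in (32, 64)),
    key=objective,
)
print(best)  # the parameter tuple with the lowest validation loss
```

In the actual workflow, hyperopt proposes the parameter tuples instead of a grid, but the contract is the same: parameters in, validation loss out.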
[[p1b1_mlrMBO]]
=== p1b1_mlrMBO

The P1B1 mlrMBO workflow evaluates a modified version of the P1B1 benchmark autoencoder using hyperparameters provided by an mlrMBO instance. The P1B1 code (p1b1_baseline.py) has been modified to expose a functional interface. The neural net remains the same. Currently, mlrMBO minimizes the validation loss.
See https://github.com/ECP-CANDLE/Supervisor/tree/master/workflows/p1b1_mlrMBO for more details.
[[p1b3_mlrMBO]]
=== p1b3_mlrMBO

The P1B3 mlrMBO workflow evaluates the P1B3 benchmark
using hyperparameters provided by an mlrMBO instance. mlrMBO
minimizes the validation loss.

See https://github.com/ECP-CANDLE/Supervisor/tree/master/workflows/p1b3_mlrMBO for more details.

[[p2b1_mlrMBO]]
=== p2b1_mlrMBO

The P2B1 mlrMBO workflow evaluates the P2B1 benchmark
using hyperparameters provided by an mlrMBO instance. mlrMBO
minimizes the validation loss.

See https://github.com/ECP-CANDLE/Supervisor/tree/master/workflows/p2b1_mlrMBO for more details.
[[nt3_mlrMBO]]
=== nt3_mlrMBO

See https://github.com/ECP-CANDLE/Supervisor/tree/master/workflows/nt3_mlrMBO for more details.
== Objective function guide
In CANDLE, *objective functions* are the calls to the machine learning (ML) models. They are functions that accept a parameter tuple describing how the model will be run and return some value, such as a loss. Typical CANDLE workflows optimize that return value over a parameter space using a model exploration (ME) algorithm.
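Concretely, the contract can be sketched in Python as follows (a hedged illustration: the JSON key +learning_rate+ and the stand-in loss computation are invented for this example, not taken from any CANDLE benchmark):

```python
import json

def objective(params_json):
    """Hypothetical objective function: decode the ME-produced JSON
    parameter fragment, 'run' the model, and return a loss as a string."""
    params = json.loads(params_json)
    # Stand-in for a real ML model evaluation.
    loss = (params["learning_rate"] - 0.01) ** 2
    return str(loss)

print(objective('{"learning_rate": 0.02}'))
```

The ME proposes parameter tuples, the objective function evaluates the model on each, and the ME uses the returned values to decide what to try next.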
This section documents how to read existing objective functions and how to develop new ones.
=== Swift/T leaf functions
Objective functions are implemented as Swift/T leaf functions, which are http://swift-lang.github.io/swift-t/guide.html#leaf_functions[described here]. In short, leaf functions are opaque to Swift. For the purposes of CANDLE, a leaf function is a command line program or a call to evaluate a string of Python code in-memory. Normally, Swift/T is free to evaluate leaf functions anywhere in the system (load balancing) in any order (as long as all input data is ready).
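For the in-memory case, Swift/T hands the interpreter a string of Python code and reads a result back. A rough Python-side analogue of what such a leaf call evaluates (the variable names +params_json+ and +result+ are conventions of this sketch, not fixed Swift/T conventions):

```python
# Sketch of evaluating a code string in-memory, as a Swift/T
# Python leaf call does conceptually.
code = """
import json
params = json.loads(params_json)
result = str(params["epochs"] * 2)   # stand-in for a model run
"""

env = {"params_json": '{"epochs": 5}'}
exec(code, env)          # run the code string in-memory
print(env["result"])     # the string handed back to Swift
```

Because the call is opaque to Swift, only the inputs it is given and the string it returns cross the boundary; everything inside the code string is invisible to the dataflow engine.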

+obj()+ is an objective function that takes parameters and returns a string to Swift. The parameters (+params+) are produced by the ME and are encoded as a JSON fragment; you can simply print them in Swift (via +printf()+) to see them. A unique identifier, +iter_indiv_id+, is also provided and is used to create a unique output directory for +out.txt+ and +err.txt+. The model itself is executed in +run_model()+, described below. Its results are then obtained by +get_results()+ and logged to +stdout+ (via +printf()+).
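The per-evaluation bookkeeping described above can be sketched as follows (the directory layout and the helper name +run_in_unique_dir+ are assumptions for illustration, not the workflow's actual code):

```python
import json
import os
import tempfile

def run_in_unique_dir(params_json, iter_indiv_id, base_dir):
    """Hypothetical sketch: give each evaluation its own directory,
    keyed by the unique identifier, holding out.txt and err.txt."""
    out_dir = os.path.join(base_dir, str(iter_indiv_id))
    os.makedirs(out_dir, exist_ok=True)
    params = json.loads(params_json)
    # Stand-in for run_model(): write stdout/stderr-style logs.
    with open(os.path.join(out_dir, "out.txt"), "w") as out:
        out.write("params: %s\n" % params)
    with open(os.path.join(out_dir, "err.txt"), "w") as err:
        err.write("")  # no errors in this sketch
    return out_dir

base = tempfile.mkdtemp()
d = run_in_unique_dir('{"epochs": 3}', "1_0", base)
print(sorted(os.listdir(d)))  # err.txt and out.txt in a unique directory
```

Keying the directory on the unique identifier prevents concurrent evaluations from overwriting each other's logs.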

+run_model()+ is a Swift +app+ function. Its body is a command line, populated with the input and output arguments. It runs +bash+ on a given script with the parameters, as specified in +obj()+. Some of the variables referenced in the body are Swift global variables. The special syntax +@stdout+ and +@stderr+ captures those streams, respectively.
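In Python terms, the behavior of such an +app+ function, running +bash+ on a script with parameters and capturing the output streams, resembles the following (a loose analogue with an invented stand-in command, not the actual Swift code):

```python
import os
import subprocess
import tempfile

workdir = tempfile.mkdtemp()
out_path = os.path.join(workdir, "out.txt")
err_path = os.path.join(workdir, "err.txt")

# Loose Python analogue of the app function's command line:
# run bash on a (stand-in) script and capture the streams,
# as Swift/T's @stdout and @stderr redirections do.
with open(out_path, "w") as out, open(err_path, "w") as err:
    subprocess.run(
        ["bash", "-c", "echo params: epochs=3"],
        stdout=out,    # captured by @stdout in Swift
        stderr=err,    # captured by @stderr in Swift
        check=True,
    )

print(open(out_path).read().strip())
```

The key point is that the +app+ body is a command line, so anything runnable from a shell can serve as the model under evaluation.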