|
3 | 3 | { |
4 | 4 | "cell_type": "raw", |
5 | 5 | "metadata": { |
6 | | - "pycharm": { |
7 | | - "name": "#%% raw\n" |
8 | | - }, |
9 | 6 | "raw_mimetype": "text/restructuredtext" |
10 | 7 | }, |
11 | 8 | "source": [ |
|
14 | 11 | }, |
15 | 12 | { |
16 | 13 | "cell_type": "markdown", |
17 | | - "metadata": { |
18 | | - "pycharm": { |
19 | | - "name": "#%% md\n" |
20 | | - } |
21 | | - }, |
| 14 | + "metadata": {}, |
22 | 15 | "source": [ |
23 | 16 | "# Hyperparameters" |
24 | 17 | ] |
25 | 18 | }, |
26 | 19 | { |
27 | 20 | "cell_type": "raw", |
28 | 21 | "metadata": { |
29 | | - "pycharm": { |
30 | | - "name": "#%% raw\n" |
31 | | - }, |
32 | 22 | "raw_mimetype": "text/restructuredtext" |
33 | 23 | }, |
34 | 24 | "source": [ |
|
40 | 30 | }, |
41 | 31 | { |
42 | 32 | "cell_type": "markdown", |
43 | | - "metadata": { |
44 | | - "pycharm": { |
45 | | - "name": "#%% md\n" |
46 | | - } |
47 | | - }, |
| 33 | + "metadata": {}, |
48 | 34 | "source": [ |
49 | 35 | "Most algorithms have **hyperparameters**. For some optimization methods, the hyperparameters are already defined and can be optimized directly. For instance, for Differential Evolution (DE) the parameters can be found by:"
50 | 36 | ] |
51 | 37 | }, |
52 | 38 | { |
53 | 39 | "cell_type": "code", |
54 | 40 | "execution_count": null, |
55 | | - "metadata": { |
56 | | - "execution": { |
57 | | - "iopub.execute_input": "2022-08-01T02:36:48.308627Z", |
58 | | - "iopub.status.busy": "2022-08-01T02:36:48.308160Z", |
59 | | - "iopub.status.idle": "2022-08-01T02:36:48.364512Z", |
60 | | - "shell.execute_reply": "2022-08-01T02:36:48.363614Z" |
61 | | - }, |
62 | | - "pycharm": { |
63 | | - "name": "#%%\n" |
64 | | - } |
65 | | - }, |
| 41 | + "metadata": {}, |
66 | 42 | "outputs": [], |
67 | 43 | "source": [ |
68 | 44 | "import json\n", |
|
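The cell above prints the algorithm's tunable parameters as JSON. The general idea of exposing nested hyperparameters as a flat, serializable dictionary can be sketched without pymoo; the `params` dictionary and the `flatten` helper below are invented for illustration and are not pymoo APIs:

```python
import json

# Hypothetical nested hyperparameter description, standing in for what an
# algorithm object might expose; names and values are invented.
params = {"CR": 0.7, "F": (0.5, 1.0), "mutation": {"eta": 20}}

def flatten(d, prefix=""):
    """Flatten nested parameter dicts into dotted keys, e.g. 'mutation.eta'."""
    out = {}
    for k, v in d.items():
        key = f"{prefix}{k}"
        if isinstance(v, dict):
            out.update(flatten(v, key + "."))
        else:
            out[key] = v
    return out

# A flat view like this is what a hyperparameter optimizer can work on.
print(json.dumps(flatten(params), indent=2))
```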
75 | 51 | }, |
76 | 52 | { |
77 | 53 | "cell_type": "markdown", |
78 | | - "metadata": { |
79 | | - "pycharm": { |
80 | | - "name": "#%% md\n" |
81 | | - } |
82 | | - }, |
| 54 | + "metadata": {}, |
83 | 55 | "source": [ |
84 | 56 | "If not provided directly, these variables are used for optimization when initializing a `HyperparameterProblem`."
85 | 57 | ] |
86 | 58 | }, |
87 | 59 | { |
88 | 60 | "cell_type": "markdown", |
89 | | - "metadata": { |
90 | | - "pycharm": { |
91 | | - "name": "#%% md\n" |
92 | | - } |
93 | | - }, |
| 61 | + "metadata": {}, |
94 | 62 | "source": [ |
95 | 63 | "Secondly, one needs to define what exactly should be optimized. For instance, for a single run on a problem (with a fixed random seed) using the well-known parameter optimization toolkit [Optuna](https://optuna.org), the implementation may look as follows:" |
96 | 64 | ] |
97 | 65 | }, |
98 | 66 | { |
99 | 67 | "cell_type": "code", |
100 | 68 | "execution_count": null, |
101 | | - "metadata": { |
102 | | - "execution": { |
103 | | - "iopub.execute_input": "2022-08-01T02:36:48.369753Z", |
104 | | - "iopub.status.busy": "2022-08-01T02:36:48.369415Z", |
105 | | - "iopub.status.idle": "2022-08-01T02:36:59.415863Z", |
106 | | - "shell.execute_reply": "2022-08-01T02:36:59.414988Z" |
107 | | - }, |
108 | | - "pycharm": { |
109 | | - "name": "#%%\n" |
110 | | - } |
111 | | - }, |
| 69 | + "metadata": {}, |
112 | 70 | "outputs": [], |
113 | 71 | "source": [ |
114 | 72 | "from pymoo.algorithms.hyperparameters import SingleObjectiveSingleRun, HyperparameterProblem\n", |
|
141 | 99 | }, |
142 | 100 | { |
143 | 101 | "cell_type": "markdown", |
144 | | - "metadata": { |
145 | | - "pycharm": { |
146 | | - "name": "#%% md\n" |
147 | | - } |
148 | | - }, |
| 102 | + "metadata": {}, |
149 | 103 | "source": [ |
150 | 104 | "Of course, you can also directly use the `MixedVariableGA` available in our framework:" |
151 | 105 | ] |
152 | 106 | }, |
153 | 107 | { |
154 | 108 | "cell_type": "code", |
155 | 109 | "execution_count": null, |
156 | | - "metadata": { |
157 | | - "execution": { |
158 | | - "iopub.execute_input": "2022-08-01T02:36:59.419480Z", |
159 | | - "iopub.status.busy": "2022-08-01T02:36:59.419084Z", |
160 | | - "iopub.status.idle": "2022-08-01T02:37:05.995629Z", |
161 | | - "shell.execute_reply": "2022-08-01T02:37:05.994612Z" |
162 | | - }, |
163 | | - "pycharm": { |
164 | | - "name": "#%%\n" |
165 | | - } |
166 | | - }, |
| 110 | + "metadata": {}, |
167 | 111 | "outputs": [], |
168 | 112 | "source": [ |
169 | 113 | "from pymoo.algorithms.hyperparameters import SingleObjectiveSingleRun, HyperparameterProblem\n", |
|
198 | 142 | }, |
199 | 143 | { |
200 | 144 | "cell_type": "markdown", |
201 | | - "metadata": { |
202 | | - "pycharm": { |
203 | | - "name": "#%% md\n" |
204 | | - } |
205 | | - }, |
| 145 | + "metadata": {}, |
206 | 146 | "source": [ |
207 | 147 | "Now, optimizing the parameters for a **single random seed** is often not desirable, and this is precisely what makes hyperparameter optimization computationally expensive. So instead of using just a single random seed, we can use the `MultiRun` performance assessment to average over multiple runs as follows:"
208 | 148 | ] |
209 | 149 | }, |
210 | 150 | { |
211 | 151 | "cell_type": "code", |
212 | 152 | "execution_count": null, |
213 | | - "metadata": { |
214 | | - "execution": { |
215 | | - "iopub.execute_input": "2022-08-01T02:37:06.000183Z", |
216 | | - "iopub.status.busy": "2022-08-01T02:37:05.999864Z", |
217 | | - "iopub.status.idle": "2022-08-01T02:37:21.459474Z", |
218 | | - "shell.execute_reply": "2022-08-01T02:37:21.458554Z" |
219 | | - }, |
220 | | - "pycharm": { |
221 | | - "name": "#%%\n" |
222 | | - } |
223 | | - }, |
| 153 | + "metadata": {}, |
224 | 154 | "outputs": [], |
225 | 155 | "source": [ |
226 | 156 | "from pymoo.algorithms.hyperparameters import HyperparameterProblem, MultiRun, stats_single_objective_mean\n", |
|
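The seed-averaging idea behind a `MultiRun`-style assessment can be sketched with a small, dependency-free stand-in. The greedy random-search "solver" and its `step` hyperparameter below are invented for illustration and are not pymoo APIs:

```python
import random

def solve(step, seed, n_evals=200):
    """Toy greedy random search minimizing f(x) = x**2.

    `step` plays the role of the hyperparameter being assessed;
    `seed` fixes one run. Invented for illustration only."""
    rng = random.Random(seed)
    x, best = 1.0, float("inf")
    for _ in range(n_evals):
        cand = x + rng.uniform(-step, step)
        f = cand * cand
        if f < best:  # greedy: keep the best point found so far
            x, best = cand, f
    return best

def mean_performance(step, seeds=range(5)):
    """Score one hyperparameter setting by the mean best objective
    value over several seeds, mirroring the averaging idea."""
    seeds = list(seeds)
    return sum(solve(step, s) for s in seeds) / len(seeds)
```

Averaging over seeds makes the score much less noisy, at the cost of one full run per seed; a well-sized `step` should score better on average than an absurdly large one.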
255 | 185 | }, |
256 | 186 | { |
257 | 187 | "cell_type": "markdown", |
258 | | - "metadata": { |
259 | | - "pycharm": { |
260 | | - "name": "#%% md\n" |
261 | | - } |
262 | | - }, |
| 188 | + "metadata": {}, |
263 | 189 | "source": [ |
264 | 190 | "Another performance measure is the number of evaluations until a specific goal has been reached. For single-objective optimization, such a goal is typically reaching a minimum function value. Thus, for the termination, we use `MinimumFunctionValueTermination` with a value of `1e-5`. We run the method for each random seed until this value has been reached or at most `500` function evaluations have taken place. The performance is then measured by the average number of function evaluations (`func_stats=stats_avg_nevals`) needed to reach the goal."
265 | 191 | ] |
266 | 192 | }, |
267 | 193 | { |
268 | 194 | "cell_type": "code", |
269 | 195 | "execution_count": null, |
270 | | - "metadata": { |
271 | | - "execution": { |
272 | | - "iopub.execute_input": "2022-08-01T02:37:21.462989Z", |
273 | | - "iopub.status.busy": "2022-08-01T02:37:21.462728Z", |
274 | | - "iopub.status.idle": "2022-08-01T02:37:38.013305Z", |
275 | | - "shell.execute_reply": "2022-08-01T02:37:38.012403Z" |
276 | | - }, |
277 | | - "pycharm": { |
278 | | - "name": "#%%\n" |
279 | | - } |
280 | | - }, |
| 196 | + "metadata": {}, |
281 | 197 | "outputs": [], |
282 | 198 | "source": [ |
283 | 199 | "from pymoo.algorithms.hyperparameters import HyperparameterProblem, MultiRun, stats_avg_nevals\n", |
|
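The evaluations-to-goal metric described above can also be sketched with a toy stand-in. The random-search solver and its `step` hyperparameter are invented for illustration and are not pymoo APIs; only the metric's shape (stop at a goal value or a budget, then average the counts over seeds) follows the text:

```python
import random

def evals_to_goal(step, seed, goal=1e-5, max_evals=500):
    """Toy random search on f(x) = x**2: count evaluations until the
    best value drops to `goal`, or stop after `max_evals`."""
    rng = random.Random(seed)
    x, best = 1.0, 1.0  # start away from the optimum at x = 0
    for i in range(1, max_evals + 1):
        cand = x + rng.uniform(-step, step)
        f = cand * cand
        if f < best:
            x, best = cand, f
        if best <= goal:
            return i  # goal reached after i evaluations
    return max_evals  # budget exhausted

def avg_nevals(step, seeds=range(10)):
    """Average evaluations-to-goal over several seeds, mirroring the
    role of func_stats=stats_avg_nevals in the text."""
    runs = [evals_to_goal(step, s) for s in seeds]
    return sum(runs) / len(runs)
```

A setting that reliably reaches the goal in few evaluations gets a low score; a setting that keeps exhausting the budget is penalized with the cap of `500`.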
313 | 229 | ] |
314 | 230 | } |
315 | 231 | ], |
316 | | - "metadata": {}, |
| 232 | + "metadata": { |
| 233 | + "language_info": { |
| 234 | + "codemirror_mode": { |
| 235 | + "name": "ipython", |
| 236 | + "version": 3 |
| 237 | + }, |
| 238 | + "file_extension": ".py", |
| 239 | + "mimetype": "text/x-python", |
| 240 | + "name": "python", |
| 241 | + "nbconvert_exporter": "python", |
| 242 | + "pygments_lexer": "ipython3" |
| 243 | + } |
| 244 | + }, |
317 | 245 | "nbformat": 4, |
318 | 246 | "nbformat_minor": 4 |
319 | 247 | } |