309 | 309 | "cell_type": "markdown", |
310 | 310 | "metadata": {}, |
311 | 311 | "source": [ |
312 | | - "## 4. Saving, loading and restarting\n", |
| 312 | + "## 4. Saving and loading the optimizer\n", |
313 | 313 | "\n", |
314 | | - "By default you can follow the progress of your optimization by setting `verbose>0` when instantiating the `BayesianOptimization` object. If you need more control over logging/alerting you will need to use an observer. For more information about observers checkout the advanced tour notebook. Here we will only see how to use the native `JSONLogger` object to save to and load progress from files.\n", |
| 314 | + "The optimizer state can be saved to a file and loaded from a file. This is useful for continuing an optimization from a previous state, or for analyzing the optimization history without running the optimizer again.\n", |
315 | 315 | "\n", |
316 | | - "### 4.1 Saving progress" |
317 | | - ] |
318 | | - }, |
319 | | - { |
320 | | - "cell_type": "code", |
321 | | - "execution_count": 14, |
322 | | - "metadata": {}, |
323 | | - "outputs": [], |
324 | | - "source": [ |
325 | | - "from bayes_opt.logger import JSONLogger\n", |
326 | | - "from bayes_opt.event import Events" |
327 | | - ] |
328 | | - }, |
329 | | - { |
330 | | - "cell_type": "markdown", |
331 | | - "metadata": {}, |
332 | | - "source": [ |
333 | | - "The observer paradigm works by:\n", |
334 | | - "1. Instantiating an observer object.\n", |
335 | | - "2. Tying the observer object to a particular event fired by an optimizer.\n", |
336 | | - "\n", |
337 | | - "The `BayesianOptimization` object fires a number of internal events during optimization, in particular, every time it probes the function and obtains a new parameter-target combination it will fire an `Events.OPTIMIZATION_STEP` event, which our logger will listen to.\n", |
338 | | - "\n", |
339 | | - "**Caveat:** The logger will not look back at previously probed points." |
340 | | - ] |
341 | | - }, |
342 | | - { |
343 | | - "cell_type": "code", |
344 | | - "execution_count": 15, |
345 | | - "metadata": {}, |
346 | | - "outputs": [], |
347 | | - "source": [ |
348 | | - "logger = JSONLogger(path=\"./logs.log\")\n", |
349 | | - "optimizer.subscribe(Events.OPTIMIZATION_STEP, logger)" |
350 | | - ] |
351 | | - }, |
352 | | - { |
353 | | - "cell_type": "code", |
354 | | - "execution_count": 16, |
355 | | - "metadata": {}, |
356 | | - "outputs": [ |
357 | | - { |
358 | | - "name": "stdout", |
359 | | - "output_type": "stream", |
360 | | - "text": [ |
361 | | - "| iter | target | x | y |\n", |
362 | | - "-------------------------------------------------\n", |
363 | | - "| \u001b[39m13 \u001b[39m | \u001b[39m-2.96 \u001b[39m | \u001b[39m-1.989407\u001b[39m | \u001b[39m0.9536339\u001b[39m |\n", |
364 | | - "| \u001b[39m14 \u001b[39m | \u001b[39m-0.7135 \u001b[39m | \u001b[39m1.0509704\u001b[39m | \u001b[39m1.7803462\u001b[39m |\n", |
365 | | - "| \u001b[39m15 \u001b[39m | \u001b[39m-18.33 \u001b[39m | \u001b[39m-1.976933\u001b[39m | \u001b[39m-2.927535\u001b[39m |\n", |
366 | | - "| \u001b[35m16 \u001b[39m | \u001b[35m0.9097 \u001b[39m | \u001b[35m-0.228312\u001b[39m | \u001b[35m0.8046706\u001b[39m |\n", |
367 | | - "| \u001b[35m17 \u001b[39m | \u001b[35m0.913 \u001b[39m | \u001b[35m0.2069253\u001b[39m | \u001b[35m1.2101397\u001b[39m |\n", |
368 | | - "=================================================\n" |
369 | | - ] |
370 | | - } |
371 | | - ], |
372 | | - "source": [ |
373 | | - "optimizer.maximize(\n", |
374 | | - " init_points=2,\n", |
375 | | - " n_iter=3,\n", |
376 | | - ")" |
377 | | - ] |
378 | | - }, |
379 | | - { |
380 | | - "cell_type": "markdown", |
381 | | - "metadata": {}, |
382 | | - "source": [ |
383 | | - "### 4.2 Loading progress\n", |
384 | | - "\n", |
385 | | - "Naturally, if you stored progress you will be able to load that onto a new instance of `BayesianOptimization`. The easiest way to do it is by invoking the `load_logs` function, from the `util` submodule." |
386 | | - ] |
387 | | - }, |
388 | | - { |
389 | | - "cell_type": "code", |
390 | | - "execution_count": 17, |
391 | | - "metadata": {}, |
392 | | - "outputs": [], |
393 | | - "source": [ |
394 | | - "from bayes_opt.util import load_logs" |
395 | | - ] |
396 | | - }, |
397 | | - { |
398 | | - "cell_type": "code", |
399 | | - "execution_count": 18, |
400 | | - "metadata": {}, |
401 | | - "outputs": [ |
402 | | - { |
403 | | - "name": "stdout", |
404 | | - "output_type": "stream", |
405 | | - "text": [ |
406 | | - "0\n" |
407 | | - ] |
408 | | - } |
409 | | - ], |
410 | | - "source": [ |
411 | | - "new_optimizer = BayesianOptimization(\n", |
412 | | - " f=black_box_function,\n", |
413 | | - " pbounds={\"x\": (-3, 3), \"y\": (-3, 3)},\n", |
414 | | - " verbose=2,\n", |
415 | | - " random_state=7,\n", |
416 | | - ")\n", |
417 | | - "print(len(new_optimizer.space))" |
418 | | - ] |
419 | | - }, |
420 | | - { |
421 | | - "cell_type": "code", |
422 | | - "execution_count": 19, |
423 | | - "metadata": {}, |
424 | | - "outputs": [], |
425 | | - "source": [ |
426 | | - "load_logs(new_optimizer, logs=[\"./logs.log\"]);" |
427 | | - ] |
428 | | - }, |
429 | | - { |
430 | | - "cell_type": "code", |
431 | | - "execution_count": 20, |
432 | | - "metadata": {}, |
433 | | - "outputs": [ |
434 | | - { |
435 | | - "name": "stdout", |
436 | | - "output_type": "stream", |
437 | | - "text": [ |
438 | | - "New optimizer is now aware of 5 points.\n" |
439 | | - ] |
440 | | - } |
441 | | - ], |
442 | | - "source": [ |
443 | | - "print(\"New optimizer is now aware of {} points.\".format(len(new_optimizer.space)))" |
444 | | - ] |
445 | | - }, |
446 | | - { |
447 | | - "cell_type": "code", |
448 | | - "execution_count": 21, |
449 | | - "metadata": {}, |
450 | | - "outputs": [ |
451 | | - { |
452 | | - "name": "stdout", |
453 | | - "output_type": "stream", |
454 | | - "text": [ |
455 | | - "| iter | target | x | y |\n", |
456 | | - "-------------------------------------------------\n", |
457 | | - "| \u001b[39m1 \u001b[39m | \u001b[39m-14.44 \u001b[39m | \u001b[39m2.9959766\u001b[39m | \u001b[39m-1.541659\u001b[39m |\n", |
458 | | - "| \u001b[39m2 \u001b[39m | \u001b[39m-3.938 \u001b[39m | \u001b[39m-0.992603\u001b[39m | \u001b[39m2.9881975\u001b[39m |\n", |
459 | | - "| \u001b[39m3 \u001b[39m | \u001b[39m-11.67 \u001b[39m | \u001b[39m2.9842190\u001b[39m | \u001b[39m2.9398042\u001b[39m |\n", |
460 | | - "| \u001b[39m4 \u001b[39m | \u001b[39m-11.43 \u001b[39m | \u001b[39m-2.966518\u001b[39m | \u001b[39m2.9062210\u001b[39m |\n", |
461 | | - "| \u001b[39m5 \u001b[39m | \u001b[39m0.3045 \u001b[39m | \u001b[39m-0.564519\u001b[39m | \u001b[39m1.6138208\u001b[39m |\n", |
462 | | - "| \u001b[39m6 \u001b[39m | \u001b[39m-3.176 \u001b[39m | \u001b[39m0.4898552\u001b[39m | \u001b[39m2.9838862\u001b[39m |\n", |
463 | | - "| \u001b[39m7 \u001b[39m | \u001b[39m0.05155 \u001b[39m | \u001b[39m0.7608462\u001b[39m | \u001b[39m0.3920796\u001b[39m |\n", |
464 | | - "| \u001b[39m8 \u001b[39m | \u001b[39m-0.2096 \u001b[39m | \u001b[39m-0.196874\u001b[39m | \u001b[39m-0.082066\u001b[39m |\n", |
465 | | - "| \u001b[39m9 \u001b[39m | \u001b[39m0.822 \u001b[39m | \u001b[39m0.2125014\u001b[39m | \u001b[39m0.6354894\u001b[39m |\n", |
466 | | - "| \u001b[39m10 \u001b[39m | \u001b[39m0.2598 \u001b[39m | \u001b[39m-0.769932\u001b[39m | \u001b[39m0.6160238\u001b[39m |\n", |
467 | | - "=================================================\n" |
468 | | - ] |
469 | | - } |
470 | | - ], |
471 | | - "source": [ |
472 | | - "new_optimizer.maximize(\n", |
473 | | - " init_points=0,\n", |
474 | | - " n_iter=10,\n", |
475 | | - ")" |
476 | | - ] |
477 | | - }, |
478 | | - { |
479 | | - "cell_type": "markdown", |
480 | | - "metadata": {}, |
481 | | - "source": [ |
482 | | - "## 5. Saving and loading the optimizer state\n", |
483 | | - "\n", |
484 | | - "The optimizer state can be saved to a file and loaded from a file. This is useful for continuing an optimization from a previous state, or for analyzing the optimization history without running the optimizer again." |
| 316 | + "Note: if you are using your own custom acquisition function, you will need to save and load the acquisition function state as well. This is done by calling the `get_acquisition_params` and `set_acquisition_params` methods of the acquisition function. See the acquisition function documentation for more information." |
485 | 317 | ] |
486 | 318 | }, |
487 | 319 | { |
488 | 320 | "cell_type": "markdown", |
489 | 321 | "metadata": {}, |
490 | 322 | "source": [ |
491 | | - "### 5.1 Saving the optimizer state\n", |
| 323 | + "### 4.1 Saving the optimizer state\n", |
492 | 324 | "\n", |
493 | 325 | "The optimizer state can be saved to a file using the `save_state` method.\n", |
494 | 326 | "optimizer.save_state(\"./optimizer_state.json\")" |
|
507 | 339 | "cell_type": "markdown", |
508 | 340 | "metadata": {}, |
509 | 341 | "source": [ |
510 | | - "## 5.2 Loading the optimizer state\n", |
| 342 | + "## 4.2 Loading the optimizer state\n", |
511 | 343 | "\n", |
512 | 344 | "To load with a previously saved state, pass the path of your saved state file to the `load_state_path` parameter. Note that if you've changed the bounds of your parameters, you'll need to pass the updated bounds to the new optimizer.\n" |
513 | 345 | ] |
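
And a matching sketch of the load step from section 4.2, assuming the state file written above and the same `black_box_function`. `load_state_path` is the constructor parameter named in the text; the exact signature may differ between bayes_opt versions. Pass updated bounds here if they changed since saving.

```python
from bayes_opt import BayesianOptimization

def black_box_function(x, y):
    return -x ** 2 - (y - 1) ** 2 + 1

# Recreate an optimizer and restore the saved state in one step.
new_optimizer = BayesianOptimization(
    f=black_box_function,
    pbounds={"x": (-3, 3), "y": (-3, 3)},  # pass updated bounds here if they changed since saving
    verbose=2,
    random_state=7,
    load_state_path="./optimizer_state.json",
)

# The restored optimizer already knows the previously probed points,
# so it can continue without fresh random initialization points.
print(f"Restored optimizer is aware of {len(new_optimizer.space)} points.")
new_optimizer.maximize(init_points=0, n_iter=5)
```
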