Commit 2f8ab64

remove former saving functionality from notebooks
1 parent 02d2643 commit 2f8ab64

2 files changed: +24 −186 lines

examples/acquisition_functions.ipynb

Lines changed: 19 additions & 13 deletions
@@ -54,25 +54,25 @@
 " 'Gaussian Process and Utility Function After {} Steps'.format(steps),\n",
 " fontsize=30\n",
 " )\n",
-" \n",
-" gs = gridspec.GridSpec(2, 1, height_ratios=[3, 1]) \n",
+"\n",
+" gs = gridspec.GridSpec(2, 1, height_ratios=[3, 1])\n",
 " axis = plt.subplot(gs[0])\n",
 " acq = plt.subplot(gs[1])\n",
-" \n",
+"\n",
 " x_obs = np.array([[res[\"params\"][\"x\"]] for res in optimizer.res])\n",
 " y_obs = np.array([res[\"target\"] for res in optimizer.res])\n",
-" \n",
+"\n",
 " acquisition_function_._fit_gp(optimizer._gp, optimizer._space)\n",
 " mu, sigma = posterior(optimizer, x)\n",
 "\n",
 " axis.plot(x, y, linewidth=3, label='Target')\n",
 " axis.plot(x_obs.flatten(), y_obs, 'D', markersize=8, label=u'Observations', color='r')\n",
 " axis.plot(x, mu, '--', color='k', label='Prediction')\n",
 "\n",
-" axis.fill(np.concatenate([x, x[::-1]]), \n",
+" axis.fill(np.concatenate([x, x[::-1]]),\n",
 " np.concatenate([mu - 1.9600 * sigma, (mu + 1.9600 * sigma)[::-1]]),\n",
 " alpha=.6, fc='c', ec='None', label='95% confidence interval')\n",
-" \n",
+"\n",
 " axis.set_xlim((-2, 10))\n",
 " axis.set_ylim((None, None))\n",
 " axis.set_ylabel('f(x)', fontdict={'size':20})\n",
@@ -82,13 +82,13 @@
 " x = x.flatten()\n",
 "\n",
 " acq.plot(x, utility, label='Utility Function', color='purple')\n",
-" acq.plot(x[np.argmax(utility)], np.max(utility), '*', markersize=15, \n",
+" acq.plot(x[np.argmax(utility)], np.max(utility), '*', markersize=15,\n",
 " label=u'Next Best Guess', markerfacecolor='gold', markeredgecolor='k', markeredgewidth=1)\n",
 " acq.set_xlim((-2, 10))\n",
 " #acq.set_ylim((0, np.max(utility) + 0.5))\n",
 " acq.set_ylabel('Utility', fontdict={'size':20})\n",
 " acq.set_xlabel('x', fontdict={'size':20})\n",
-" \n",
+"\n",
 " axis.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)\n",
 " acq.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)\n",
 " return fig, fig.axes"
@@ -110,7 +110,7 @@
 "class GreedyAcquisition(acquisition.AcquisitionFunction):\n",
 " def __init__(self, random_state=None):\n",
 " super().__init__(random_state)\n",
-" \n",
+"\n",
 " def base_acq(self, mean, std):\n",
 " return mean # disregard std"
 ]
@@ -357,11 +357,17 @@
 ]
 },
 {
-"cell_type": "code",
-"execution_count": null,
+"cell_type": "markdown",
 "metadata": {},
-"outputs": [],
-"source": []
+"source": [
+"### Saving and loading with custom acquisition functions\n",
+"\n",
+"If you are using your own custom acquisition function, you will need to save and load the acquisition function state as well. Acquisition functions provide `get_acquisition_params` and `set_acquisition_params` methods for this purpose: `get_acquisition_params` returns a dictionary containing the acquisition function parameters, and `set_acquisition_params` takes such a dictionary and updates the acquisition function state.\n",
+"\n",
+"```python\n",
+"acquisition_function.get_acquisition_params()\n",
+"```"
+]
 },
 {
 "cell_type": "code",
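The markdown cell added in the last hunk describes a `get_acquisition_params`/`set_acquisition_params` round trip but only shows the getter. A minimal sketch of the full save/load cycle, using a hypothetical stand-in class rather than the real `bayes_opt` API (only the two method names and the dict-in/dict-out contract come from the cell above; everything else is assumed for illustration):

```python
import json

# Hypothetical stand-in mirroring the described protocol:
# get_acquisition_params returns a dict of state,
# set_acquisition_params restores state from such a dict.
class GreedyAcquisition:
    def __init__(self, random_state=None):
        self.random_state = random_state
        self.n_evals = 0  # example piece of mutable state

    def get_acquisition_params(self):
        return {"random_state": self.random_state, "n_evals": self.n_evals}

    def set_acquisition_params(self, params):
        self.random_state = params["random_state"]
        self.n_evals = params["n_evals"]

# Save: serialize the parameter dict (e.g. alongside the optimizer state).
acq = GreedyAcquisition(random_state=42)
acq.n_evals = 7
saved = json.dumps(acq.get_acquisition_params())

# Load: restore the dict into a fresh acquisition function.
restored = GreedyAcquisition()
restored.set_acquisition_params(json.loads(saved))
assert restored.get_acquisition_params() == acq.get_acquisition_params()
```

Because the state is a plain dictionary, any serialization format that round-trips dicts (JSON here) works.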

examples/basic-tour.ipynb

Lines changed: 5 additions & 173 deletions
@@ -309,186 +309,18 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## 4. Saving, loading and restarting\n",
+"## 4. Saving and loading the optimizer\n",
 "\n",
-"By default you can follow the progress of your optimization by setting `verbose>0` when instantiating the `BayesianOptimization` object. If you need more control over logging/alerting you will need to use an observer. For more information about observers checkout the advanced tour notebook. Here we will only see how to use the native `JSONLogger` object to save to and load progress from files.\n",
+"The optimizer state can be saved to a file and loaded from a file. This is useful for continuing an optimization from a previous state, or for analyzing the optimization history without running the optimizer again.\n",
 "\n",
-"### 4.1 Saving progress"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 14,
-"metadata": {},
-"outputs": [],
-"source": [
-"from bayes_opt.logger import JSONLogger\n",
-"from bayes_opt.event import Events"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"The observer paradigm works by:\n",
-"1. Instantiating an observer object.\n",
-"2. Tying the observer object to a particular event fired by an optimizer.\n",
-"\n",
-"The `BayesianOptimization` object fires a number of internal events during optimization, in particular, every time it probes the function and obtains a new parameter-target combination it will fire an `Events.OPTIMIZATION_STEP` event, which our logger will listen to.\n",
-"\n",
-"**Caveat:** The logger will not look back at previously probed points."
-]
-},
-{
-"cell_type": "code",
-"execution_count": 15,
-"metadata": {},
-"outputs": [],
-"source": [
-"logger = JSONLogger(path=\"./logs.log\")\n",
-"optimizer.subscribe(Events.OPTIMIZATION_STEP, logger)"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 16,
-"metadata": {},
-"outputs": [
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"| iter | target | x | y |\n",
-"-------------------------------------------------\n",
-"| \u001b[39m13 \u001b[39m | \u001b[39m-2.96 \u001b[39m | \u001b[39m-1.989407\u001b[39m | \u001b[39m0.9536339\u001b[39m |\n",
-"| \u001b[39m14 \u001b[39m | \u001b[39m-0.7135 \u001b[39m | \u001b[39m1.0509704\u001b[39m | \u001b[39m1.7803462\u001b[39m |\n",
-"| \u001b[39m15 \u001b[39m | \u001b[39m-18.33 \u001b[39m | \u001b[39m-1.976933\u001b[39m | \u001b[39m-2.927535\u001b[39m |\n",
-"| \u001b[35m16 \u001b[39m | \u001b[35m0.9097 \u001b[39m | \u001b[35m-0.228312\u001b[39m | \u001b[35m0.8046706\u001b[39m |\n",
-"| \u001b[35m17 \u001b[39m | \u001b[35m0.913 \u001b[39m | \u001b[35m0.2069253\u001b[39m | \u001b[35m1.2101397\u001b[39m |\n",
-"=================================================\n"
-]
-}
-],
-"source": [
-"optimizer.maximize(\n",
-" init_points=2,\n",
-" n_iter=3,\n",
-")"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"### 4.2 Loading progress\n",
-"\n",
-"Naturally, if you stored progress you will be able to load that onto a new instance of `BayesianOptimization`. The easiest way to do it is by invoking the `load_logs` function, from the `util` submodule."
-]
-},
-{
-"cell_type": "code",
-"execution_count": 17,
-"metadata": {},
-"outputs": [],
-"source": [
-"from bayes_opt.util import load_logs"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 18,
-"metadata": {},
-"outputs": [
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"0\n"
-]
-}
-],
-"source": [
-"new_optimizer = BayesianOptimization(\n",
-" f=black_box_function,\n",
-" pbounds={\"x\": (-3, 3), \"y\": (-3, 3)},\n",
-" verbose=2,\n",
-" random_state=7,\n",
-")\n",
-"print(len(new_optimizer.space))"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 19,
-"metadata": {},
-"outputs": [],
-"source": [
-"load_logs(new_optimizer, logs=[\"./logs.log\"]);"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 20,
-"metadata": {},
-"outputs": [
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"New optimizer is now aware of 5 points.\n"
-]
-}
-],
-"source": [
-"print(\"New optimizer is now aware of {} points.\".format(len(new_optimizer.space)))"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 21,
-"metadata": {},
-"outputs": [
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"| iter | target | x | y |\n",
-"-------------------------------------------------\n",
-"| \u001b[39m1 \u001b[39m | \u001b[39m-14.44 \u001b[39m | \u001b[39m2.9959766\u001b[39m | \u001b[39m-1.541659\u001b[39m |\n",
-"| \u001b[39m2 \u001b[39m | \u001b[39m-3.938 \u001b[39m | \u001b[39m-0.992603\u001b[39m | \u001b[39m2.9881975\u001b[39m |\n",
-"| \u001b[39m3 \u001b[39m | \u001b[39m-11.67 \u001b[39m | \u001b[39m2.9842190\u001b[39m | \u001b[39m2.9398042\u001b[39m |\n",
-"| \u001b[39m4 \u001b[39m | \u001b[39m-11.43 \u001b[39m | \u001b[39m-2.966518\u001b[39m | \u001b[39m2.9062210\u001b[39m |\n",
-"| \u001b[39m5 \u001b[39m | \u001b[39m0.3045 \u001b[39m | \u001b[39m-0.564519\u001b[39m | \u001b[39m1.6138208\u001b[39m |\n",
-"| \u001b[39m6 \u001b[39m | \u001b[39m-3.176 \u001b[39m | \u001b[39m0.4898552\u001b[39m | \u001b[39m2.9838862\u001b[39m |\n",
-"| \u001b[39m7 \u001b[39m | \u001b[39m0.05155 \u001b[39m | \u001b[39m0.7608462\u001b[39m | \u001b[39m0.3920796\u001b[39m |\n",
-"| \u001b[39m8 \u001b[39m | \u001b[39m-0.2096 \u001b[39m | \u001b[39m-0.196874\u001b[39m | \u001b[39m-0.082066\u001b[39m |\n",
-"| \u001b[39m9 \u001b[39m | \u001b[39m0.822 \u001b[39m | \u001b[39m0.2125014\u001b[39m | \u001b[39m0.6354894\u001b[39m |\n",
-"| \u001b[39m10 \u001b[39m | \u001b[39m0.2598 \u001b[39m | \u001b[39m-0.769932\u001b[39m | \u001b[39m0.6160238\u001b[39m |\n",
-"=================================================\n"
-]
-}
-],
-"source": [
-"new_optimizer.maximize(\n",
-" init_points=0,\n",
-" n_iter=10,\n",
-")"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"## 5. Saving and loading the optimizer state\n",
-"\n",
-"The optimizer state can be saved to a file and loaded from a file. This is useful for continuing an optimization from a previous state, or for analyzing the optimization history without running the optimizer again."
+"Note: if you are using your own custom acquisition function, you will need to save and load the acquisition function state as well. This is done by calling the `get_acquisition_params` and `set_acquisition_params` methods of the acquisition function. See the acquisition function documentation for more information."
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### 5.1 Saving the optimizer state\n",
+"### 4.1 Saving the optimizer state\n",
 "\n",
 "The optimizer state can be saved to a file using the `save_state` method.\n",
 "optimizer.save_state(\"./optimizer_state.json\")"
@@ -507,7 +339,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## 5.2 Loading the optimizer state\n",
+"## 4.2 Loading the optimizer state\n",
 "\n",
 "To load with a previously saved state, pass the path of your saved state file to the `load_state_path` parameter. Note that if you've changed the bounds of your parameters, you'll need to pass the updated bounds to the new optimizer.\n"
 ]
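The rewritten basic-tour section describes saving with `save_state` and resuming by passing `load_state_path` at construction. A sketch of that save/resume pattern using a hypothetical minimal optimizer class (the real `BayesianOptimization` constructor takes more arguments; only the `save_state` method and `load_state_path` parameter names come from the notebook text above):

```python
import json
import os
import tempfile

# Hypothetical minimal optimizer illustrating the save_state / load_state_path
# pattern: state is persisted as a JSON file and restored at construction.
class TinyOptimizer:
    def __init__(self, pbounds, load_state_path=None):
        self.pbounds = dict(pbounds)
        self.history = []  # list of {"params": ..., "target": ...} records
        if load_state_path is not None:
            with open(load_state_path) as f:
                self.history = json.load(f)["history"]

    def register(self, params, target):
        # Record an observed parameter-target pair.
        self.history.append({"params": params, "target": target})

    def save_state(self, path):
        # Persist the optimization history to disk.
        with open(path, "w") as f:
            json.dump({"history": self.history}, f)

# Run, save, then resume from the saved state in a fresh instance.
opt = TinyOptimizer(pbounds={"x": (-3, 3)})
opt.register({"x": 1.0}, target=-0.5)
path = os.path.join(tempfile.mkdtemp(), "optimizer_state.json")
opt.save_state(path)

resumed = TinyOptimizer(pbounds={"x": (-3, 3)}, load_state_path=path)
assert len(resumed.history) == 1
```

Note that, as the notebook text says, bounds are passed fresh to the new instance rather than read from the state file, which is what lets you change them between runs.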
