Changes from all commits (34 commits)
1f0166b
Implement Weights and Biases logger
AngelFP Feb 23, 2024
e90b03d
Add `data` property to `Trial`
AngelFP Feb 23, 2024
80fd2ed
Add `logger` argument to `Exploration`.
AngelFP Feb 23, 2024
ae092d2
Add test
AngelFP Feb 23, 2024
c2b64e3
Merge branch 'main' into feature/wandb_logger
AngelFP Feb 23, 2024
cb92d1a
Add logger back to generator after call to `libE`
AngelFP Feb 23, 2024
d4e550a
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 27, 2024
01b8a03
Merge branch 'main' into feature/wandb_logger
AngelFP Feb 27, 2024
1aab827
Merge branch 'main' into feature/wandb_logger
AngelFP Mar 7, 2024
4ecfeb6
Add `wandb` to test dependencies
AngelFP Mar 8, 2024
b15371a
Update tests with W&B API key
AngelFP Mar 8, 2024
73732a0
Add docstrings
AngelFP Mar 8, 2024
4c58cc5
Fix bug
AngelFP Mar 8, 2024
319c2fd
Add `wandb` to RTD dependencies
AngelFP Mar 8, 2024
969c347
Add docstrings
AngelFP Mar 8, 2024
371d578
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 8, 2024
d100706
Fix docstrings
AngelFP Mar 8, 2024
c22485a
Add logging docs
AngelFP Mar 8, 2024
c4ffa69
Fix docstring
AngelFP Mar 8, 2024
e8e06e5
Rename parameters
AngelFP Mar 8, 2024
cae0689
Fix links and docs
AngelFP Mar 8, 2024
5ba0d14
Merge branch 'feature/wandb_logger' of https://github.com/optimas-org…
AngelFP Mar 8, 2024
f29e0bd
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 8, 2024
02be9d3
Fix docstring
AngelFP Mar 8, 2024
da07308
Merge branch 'feature/wandb_logger' of https://github.com/optimas-org…
AngelFP Mar 8, 2024
58c1df4
Warn if `wandb` is not installed
AngelFP Mar 8, 2024
4932d09
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 8, 2024
9732815
Improve test description
AngelFP Mar 8, 2024
b871a0a
Merge branch 'feature/wandb_logger' of https://github.com/optimas-org…
AngelFP Mar 8, 2024
6b06786
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 12, 2024
83076f2
Merge branch 'main' into feature/wandb_logger
AngelFP Apr 23, 2024
a4dc9d6
Merge branch 'main' into feature/wandb_logger
AngelFP May 10, 2024
ed2da3d
Merge branch 'main' into feature/wandb_logger
AngelFP May 23, 2024
e8cbc8a
Merge branch 'main' into feature/wandb_logger
AngelFP Jun 11, 2024
2 changes: 2 additions & 0 deletions .github/workflows/unix-openmpi.yml
@@ -33,6 +33,8 @@ jobs:
          pip install .[test]
      - shell: bash -l {0}
        name: Run unit tests with openMPI
        env:
          WANDB_API_KEY: ${{ secrets.WANDB_API_KEY }}
        run: |
          python -m pytest tests/
          mpirun -np 3 --oversubscribe python -m pytest --with-mpi tests/test_grid_sampling_mpi.py
2 changes: 2 additions & 0 deletions .github/workflows/unix.yml
@@ -33,6 +33,8 @@ jobs:
          pip install .[test]
      - shell: bash -l {0}
        name: Run unit tests with MPICH
        env:
          WANDB_API_KEY: ${{ secrets.WANDB_API_KEY }}
        run: |
          python -m pytest tests/
          mpirun -np 3 python -m pytest --with-mpi tests/test_grid_sampling_mpi.py
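
Note: ``wandb`` authenticates from the ``WANDB_API_KEY`` environment variable automatically, so exporting the repository secret in these jobs should be enough; the tests do not need to pass an API key explicitly.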
1 change: 1 addition & 0 deletions doc/environment.yaml
@@ -15,3 +15,4 @@ dependencies:
  - sphinx-copybutton
  - sphinx-design
  - sphinx-gallery
  - wandb
1 change: 1 addition & 0 deletions doc/source/api/index.rst
@@ -11,4 +11,5 @@ This reference manual details all classes included in optimas.
   evaluators
   exploration
   diagnostics
   loggers
   utils
9 changes: 9 additions & 0 deletions doc/source/api/loggers.rst
@@ -0,0 +1,9 @@
Loggers
=======

.. currentmodule:: optimas.loggers

.. autosummary::
   :toctree: _autosummary

   WandBLogger
155 changes: 155 additions & 0 deletions doc/source/user_guide/advanced_usage/log_to_wandb.rst
@@ -0,0 +1,155 @@
Log an ``Exploration`` to Weights and Biases
============================================

`Weights and Biases <https://wandb.ai/site>`_ (W&B) is a powerful tool for
tracking and visualizing machine learning experiments. Optimas has built-in
support for logging to W&B, allowing users to easily track and compare the
performance of different optimization runs.

This page explains how to use the :class:`~optimas.loggers.WandBLogger`
class within optimas to log an :class:`~optimas.explorations.Exploration`
to Weights and Biases.


Basic example
-------------

To log an :class:`~optimas.explorations.Exploration` to Weights and Biases,
you first need to instantiate a :class:`~optimas.loggers.WandBLogger` object.
This object requires several parameters, including your W&B API key and the
project name and, optionally, a run name, run ID, data types for specific
parameters, and a user-defined function for custom logs. For example:

.. code-block:: python

   from optimas.loggers import WandBLogger

   logger = WandBLogger(
       api_key="your_wandb_api_key",
       project="your_project_name",
       run="example_run",  # optional
   )

This logger can then be passed to an ``Exploration``, such as in the example
below:

.. code-block:: python

   from optimas.explorations import Exploration
   from optimas.generators import RandomSamplingGenerator
   from optimas.evaluators import FunctionEvaluator
   from optimas.loggers import WandBLogger
   from optimas.core import VaryingParameter, Objective


   # Define the function to be optimized.
   def objective_function(inputs, outputs):
       x = inputs["x"]
       y = inputs["y"]
       outputs["result"] = x**2 + y**2


   # Define the evaluator.
   evaluator = FunctionEvaluator(objective_function)

   # Define the generator.
   generator = RandomSamplingGenerator(
       varying_parameters=[
           VaryingParameter(name="x", lower_bound=-10, upper_bound=10),
           VaryingParameter(name="y", lower_bound=-10, upper_bound=10),
       ],
       objectives=[Objective(name="result", minimize=True)],
   )

   # Instantiate the WandBLogger.
   logger = WandBLogger(
       api_key="your_wandb_api_key",
       project="your_project_name",
       run="example_run",
   )

   # Create the Exploration and pass it the logger and evaluator.
   exploration = Exploration(
       generator=generator, evaluator=evaluator, logger=logger
   )

   # Run the exploration.
   exploration.run(n_evals=100)


Customizing the data type of the logger arguments
-------------------------------------------------

The ``data_types`` argument allows you to specify the W&B
`data type <https://docs.wandb.ai/ref/python/data-types/>`_ of specific
parameters when logging to Weights and Biases. This is useful for ensuring
that your data is logged in the desired format. ``data_types`` should be a
dictionary whose keys are the names of the parameters you wish to log and
whose values are dictionaries containing the ``type`` and ``type_kwargs``
for each parameter.

For example, suppose you have defined two analyzed parameters called
``"parameter_1"`` and ``"parameter_2"`` that at each evaluation store,
respectively, an image (or matplotlib figure) and a numpy array. You can
tell the logger to log the first one as an image and the second one as a
histogram:

.. code-block:: python

   import wandb

   data_types = {
       "parameter_1": {"type": wandb.Image, "type_kwargs": {}},
       "parameter_2": {"type": wandb.Histogram, "type_kwargs": {}},
   }

   logger = WandBLogger(
       api_key="your_wandb_api_key",
       project="your_project_name",
       data_types=data_types,
       # Other parameters...
   )


Defining custom logs
--------------------

By default, the ``WandBLogger`` logs the varying parameters, objectives,
and analyzed parameters of the ``Exploration``.
If you want to include your own custom logs, you can provide a
``custom_logs`` function that generates them.
This function will be called every time a trial evaluation finishes.

The ``custom_logs`` function should take two arguments, which correspond
to the most recently evaluated :class:`~optimas.core.Trial` and the
currently active ``Generator``.
You do not need to use them, but they are available for convenience.
The function must return a dictionary with the appropriate shape to be
passed to ``wandb.log``.

Here's an example of how to define a ``custom_logs`` function:

.. code-block:: python

   def custom_logs(trial, generator):
       # Example: Log the best score so far.
       best_score = None
       trials = generator.completed_trials
       for completed_trial in trials:
           score = completed_trial.data["result"]
           if best_score is None:
               best_score = score
           elif score < best_score:
               best_score = score
       return {"Best Score": best_score}


   logger = WandBLogger(
       api_key="your_wandb_api_key",
       project="your_project_name",
       custom_logs=custom_logs,
       # Other parameters...
   )
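
As a companion to the guide above, here is a minimal sketch of an evaluation function that produces the two analyzed parameters used in the ``data_types`` and ``custom_logs`` examples. It is illustrative only and assumes ``parameter_1`` and ``parameter_2`` were declared as analyzed parameters of the generator:

import numpy as np
import matplotlib.pyplot as plt


def objective_function(inputs, outputs):
    x = inputs["x"]
    samples = np.random.normal(loc=x, size=1000)
    # Objective value, read by the logger and by ``custom_logs``.
    outputs["result"] = float(samples.mean() ** 2)
    # Analyzed parameters, matched by name in ``data_types``.
    fig, ax = plt.subplots()
    ax.hist(samples, bins=50)
    outputs["parameter_1"] = fig  # logged as a wandb.Image
    outputs["parameter_2"] = samples  # logged as a wandb.Histogram
    plt.close(fig)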
1 change: 1 addition & 0 deletions doc/source/user_guide/index.rst
@@ -27,6 +27,7 @@ User guide
   :caption: Advanced usage

   advanced_usage/build_gp_surrogates
   advanced_usage/log_to_wandb

.. toctree::
   :maxdepth: 1
15 changes: 15 additions & 0 deletions optimas/core/trial.py
@@ -152,6 +152,21 @@ def evaluated(self) -> bool:
        """Determine whether the trial has been evaluated."""
        return self.completed or self.failed

    @property
    def data(self) -> Dict:
        """Get a dictionary with all the trial data."""
        vp_dict = self.parameters_as_dict()
        ap_dict = self.analyzed_parameters_as_dict()
        ob_dict = self.objectives_as_dict()
        # Do not report uncertainty. We haven't yet decided how to
        # report it in the history.
        for key, val in ob_dict.items():
            ob_dict[key] = val[0]
        for key, val in ap_dict.items():
            ap_dict[key] = val[0]
        data = {**vp_dict, **ob_dict, **ap_dict}
        return data

    def mark_as(self, status) -> None:
        """Set trial status.

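For reference, a sketch of how the new ``data`` property behaves. The values are illustrative, and the ``(value, uncertainty)`` layout of the objective and analyzed-parameter dictionaries is inferred from the ``val[0]`` indexing above:

# Hypothetical evaluated trial with one varying parameter ("x"),
# one objective ("result") and one analyzed parameter ("p").
trial.parameters_as_dict()           # {"x": 1.5}
trial.objectives_as_dict()           # {"result": (2.25, None)}
trial.analyzed_parameters_as_dict()  # {"p": (0.3, None)}
trial.data                           # {"x": 1.5, "result": 2.25, "p": 0.3}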
19 changes: 16 additions & 3 deletions optimas/explorations/base.py
@@ -21,6 +21,7 @@
from optimas.evaluators.function_evaluator import FunctionEvaluator
from optimas.utils.logger import get_logger
from optimas.utils.other import convert_to_dataframe
from optimas.loggers.base import Logger


logger = get_logger(__name__)
@@ -78,6 +79,10 @@ class Exploration:
        manager and ``N-1`` simulation workers. In this case, the
        ``sim_workers`` parameter is ignored. By default, ``'local'`` mode
        is used.
    logger : Logger, optional
        A custom logger that is informed of every completed trial and can
        report on the results. Currently, a Weights and Biases logger is
        available.

    """

@@ -93,6 +98,7 @@ def __init__(
        exploration_dir_path: Optional[str] = "./exploration",
        resume: Optional[bool] = False,
        libe_comms: Optional[Literal["local", "threads", "mpi"]] = "local",
        logger: Optional[Logger] = None,
    ) -> None:
        # For backward compatibility, check for old threading name.
        if libe_comms == "local_threading":
@@ -125,6 +131,10 @@ def __init__(
        self._libe_history = self._create_libe_history()
        self._load_history(history, resume)
        self._is_manager = self._set_manager(self.libe_comms, self.libE_specs)
        self._logger = logger
        if self._logger is not None:
            self._logger.initialize(self)
            self.generator._set_logger(self._logger)

    @property
    def is_manager(self):
@@ -194,7 +204,7 @@ def run(self, n_evals: Optional[int] = None) -> None:
        # Get gen_specs and sim_specs.
        run_params = self.evaluator.get_run_params()
        gen_specs = self.generator.get_gen_specs(
-            self.sim_workers, run_params, sim_max
+            self.sim_workers, run_params, sim_max, self.libe_comms
        )
        sim_specs = self.evaluator.get_sim_specs(
            self.generator.varying_parameters,
@@ -417,7 +427,10 @@ def attach_evaluations(
        # Fill in new rows.
        for field in fields:
            if field in history_new.dtype.names:
-                history_new[field] = evaluation_data[field]
+                # Converting to list prevents the error
+                # "ValueError: setting an array element with a sequence"
+                # when the field contains an array.
+                history_new[field] = evaluation_data[field].to_list()

        if not is_history:
            current_time = time.time()
@@ -507,7 +520,7 @@ def _create_libe_history(self) -> History:
        """Initialize an empty libEnsemble history."""
        run_params = self.evaluator.get_run_params()
        gen_specs = self.generator.get_gen_specs(
-            self.sim_workers, run_params, None
+            self.sim_workers, run_params, None, self.libe_comms
        )
        sim_specs = self.evaluator.get_sim_specs(
            self.generator.varying_parameters,
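
A minimal reproduction of the numpy failure mode that the ``.to_list()`` conversion in ``attach_evaluations`` works around (the structured-array layout below is assumed for illustration):

import numpy as np
import pandas as pd

# Structured array with an array-valued field, as in a libEnsemble history.
history_new = np.zeros(2, dtype=[("x", float), ("f", float, (3,))])
column = pd.Series([np.ones(3), np.full(3, 2.0)])

# history_new["f"] = column  # raises:
# ValueError: setting an array element with a sequence.
history_new["f"] = column.to_list()  # assigns row by row as intended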
10 changes: 8 additions & 2 deletions optimas/generators/ax/developer/multitask.py
@@ -131,11 +131,17 @@ def __init__(
        self._experiment = self._create_experiment()

    def get_gen_specs(
-        self, sim_workers: int, run_params: Dict, sim_max: int
+        self,
+        sim_workers: int,
+        run_params: Dict,
+        max_evals: int,
+        libe_comms: str,
    ) -> Dict:
        """Get the libEnsemble gen_specs."""
        # Get base specs.
-        gen_specs = super().get_gen_specs(sim_workers, run_params, sim_max)
+        gen_specs = super().get_gen_specs(
+            sim_workers, run_params, max_evals, libe_comms
+        )
        # Add task to output parameters.
        max_length = max([len(self.lofi_task.name), len(self.hifi_task.name)])
        gen_specs["out"].append(("task", str, max_length))
27 changes: 25 additions & 2 deletions optimas/generators/base.py
@@ -3,7 +3,7 @@
from __future__ import annotations
import os
from copy import deepcopy
-from typing import List, Dict, Optional, Union
+from typing import List, Dict, Optional, Union, TYPE_CHECKING

import numpy as np
import pandas as pd
@@ -21,6 +21,9 @@
    TrialStatus,
)

if TYPE_CHECKING:
    from optimas.loggers.base import Logger

logger = get_logger(__name__)


@@ -114,6 +117,7 @@ def __init__(
        self._queued_trials = []  # Trials queued to be given for evaluation.
        self._trial_count = 0
        self._check_parameters(self._varying_parameters)
        self._logger = None

    @property
    def varying_parameters(self) -> List[VaryingParameter]:
@@ -150,6 +154,11 @@ def dedicated_resources(self) -> bool:
        """Get whether the generator has dedicated resources allocated."""
        return self._dedicated_resources

    @property
    def completed_trials(self) -> List[Trial]:
        """Get list of completed trials."""
        return [trial for trial in self._given_trials if trial.completed]

    @property
    def n_queued_trials(self) -> int:
        """Get the number of trials queued for evaluation."""
@@ -266,6 +275,8 @@ def tell(
            else:
                log_msg = f"Failed to evaluate trial {trial.index}."
            logger.info(log_msg)
            if self._logger is not None:
                self._logger.log_trial(trial, self)
        if allow_saving_model and self._save_model:
            self.save_model_to_file()

@@ -510,7 +521,11 @@ def save_model_to_file(self) -> None:
        )

    def get_gen_specs(
-        self, sim_workers: int, run_params: Dict, max_evals: int
+        self,
+        sim_workers: int,
+        run_params: Dict,
+        max_evals: int,
+        libe_comms: str,
    ) -> Dict:
        """Get the libEnsemble gen_specs.

@@ -523,6 +538,10 @@ def get_gen_specs(
            required.
        max_evals : int
            Maximum number of evaluations to generate.
        libe_comms : {'local', 'threads', 'mpi'}, optional
            The communication mode for libEnsemble. Used to determine
            whether the generator is running on a thread (and therefore
            in shared memory).

        """
        gen_specs = {
@@ -613,3 +632,7 @@ def _check_parameters(self, parameters: List[VaryingParameter]):
                f"{self.__class__.__name__} does not support fixing "
                "the value of a VaryingParameter."
            )

    def _set_logger(self, logger: Logger) -> None:
        """Set the generator logger."""
        self._logger = logger
4 changes: 4 additions & 0 deletions optimas/loggers/__init__.py
@@ -0,0 +1,4 @@
from .wandb_logger import WandBLogger


__all__ = ["WandBLogger"]
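
Based on the hooks exercised in this PR (``initialize`` called from ``Exploration.__init__`` and ``log_trial`` called from ``Generator.tell``), a custom logger would look roughly like the sketch below. The base-class details are assumptions inferred from those call sites, not a documented API:

from optimas.loggers.base import Logger


class PrintLogger(Logger):
    """Toy logger that prints every evaluated trial."""

    def initialize(self, exploration):
        # Called once, when the Exploration is created.
        self._n_logged = 0

    def log_trial(self, trial, generator):
        # Called each time a trial evaluation finishes.
        self._n_logged += 1
        print(f"Trial {trial.index} ({self._n_logged} logged): {trial.data}")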