
Releases: CyberAgentAILab/cmaes

v0.13.0

28 Mar 07:48
a9b0214


New Algorithms

COMO-CatCMA with Margin [Hamano et al. 2026]

COMO-CatCMA with Margin (COMO-CatCMAwM) performs multi-objective mixed-variable optimization by coordinating multiple CatCMAwM optimizers. It currently supports two-objective problems; support for three or more objectives is planned.

import numpy as np
from cmaes import COMOCatCMAwM


def DSIntLFTL(x, z, c, cat_num):
    Sphere1 = sum((x / 10) ** 2) / len(x)
    Sphere2 = sum((x / 10 - 1) ** 2) / len(x)
    SphereInt1 = sum((z / 10) ** 2) / len(z)
    SphereInt2 = sum((z / 10 - 1) ** 2) / len(z)
    c_idx = c.argmax(axis=1)
    LF = (len(c) - (c_idx == 0).cumprod().sum()) / len(c)
    TL = (len(c) - (c_idx == np.asarray(cat_num) - 1)[::-1].cumprod().sum()) / len(c)
    obj1 = Sphere1 + SphereInt1 + LF
    obj2 = Sphere2 + SphereInt2 + TL
    return [obj1, obj2]


if __name__ == "__main__":
    # [lower_bound, upper_bound] for each continuous variable
    X = [[-5, 15]] * 3
    # possible values for each integer variable
    Z = [range(-5, 16)] * 3
    # number of categories for each categorical variable
    C = [5] * 3

    optimizer = COMOCatCMAwM(x_space=X, z_space=Z, c_space=C)

    evals = 0
    while evals < 7000:
        solutions = []
        for sol in optimizer.ask_iter():
            value = DSIntLFTL(sol.x, sol.z, sol.c, C)
            evals += 1
            solutions.append((sol, value))
        optimizer.tell(solutions)
        print(evals, optimizer.incumbent_objectives)

References

What's Changed

Full Changelog: v0.12.0...v0.13.0
PyPI: https://pypi.org/project/cmaes/

v0.12.0

23 Jul 07:08
e4bbf18


New Algorithms

CatCMA with Margin (GECCO2025) by @ha-mano

CatCMA with Margin (CatCMAwM) is a method for mixed-variable optimization problems, simultaneously optimizing continuous, integer, and categorical variables. CatCMAwM extends CatCMA by introducing a novel integer handling mechanism, and supports arbitrary combinations of continuous, integer, and categorical variables in a unified framework.


Source code
import numpy as np
from cmaes import CatCMAwM


def SphereIntCOM(x, z, c):
    return sum(x * x) + sum(z * z) + len(c) - sum(c[:, 0])


def SphereInt(x, z):
    return sum(x * x) + sum(z * z)


def SphereCOM(x, c):
    return sum(x * x) + len(c) - sum(c[:, 0])


def f_cont_int_cat():
    # [lower_bound, upper_bound] for each continuous variable
    X = [[-5, 5], [-5, 5]]
    # possible values for each integer variable
    Z = [[-1, 0, 1], [-2, -1, 0, 1, 2]]
    # number of categories for each categorical variable
    C = [3, 3]

    optimizer = CatCMAwM(x_space=X, z_space=Z, c_space=C)

    for generation in range(50):
        solutions = []
        for _ in range(optimizer.population_size):
            sol = optimizer.ask()
            value = SphereIntCOM(sol.x, sol.z, sol.c)
            solutions.append((sol, value))
            print(f"#{generation} {sol} evaluation: {value}")
        optimizer.tell(solutions)


def f_cont_int():
    # [lower_bound, upper_bound] for each continuous variable
    X = [[-np.inf, np.inf], [-np.inf, np.inf]]
    # possible values for each integer variable
    Z = [[-2, -1, 0, 1, 2], [-2, -1, 0, 1, 2]]

    # initial distribution parameters (Optional)
    # If you know a promising solution for X and Z, set init_mean to that value.
    init_mean = np.ones(len(X) + len(Z))
    init_cov = np.diag(np.ones(len(X) + len(Z)))
    init_sigma = 1.0

    optimizer = CatCMAwM(
        x_space=X, z_space=Z, mean=init_mean, cov=init_cov, sigma=init_sigma
    )

    for generation in range(50):
        solutions = []
        for _ in range(optimizer.population_size):
            sol = optimizer.ask()
            value = SphereInt(sol.x, sol.z)
            solutions.append((sol, value))
            print(f"#{generation} {sol} evaluation: {value}")
        optimizer.tell(solutions)


def f_cont_cat():
    # [lower_bound, upper_bound] for each continuous variable
    X = [[-5, 5], [-5, 5]]
    # number of categories for each categorical variable
    C = [3, 5]

    # initial distribution parameters (Optional)
    init_cat_param = np.array(
        [
            [0.5, 0.3, 0.2, 0.0, 0.0],  # zero-padded at the end
            [0.2, 0.2, 0.2, 0.2, 0.2],  # each row must sum to 1
        ]
    )

    optimizer = CatCMAwM(x_space=X, c_space=C, cat_param=init_cat_param)

    for generation in range(50):
        solutions = []
        for _ in range(optimizer.population_size):
            sol = optimizer.ask()
            value = SphereCOM(sol.x, sol.c)
            solutions.append((sol, value))
            print(f"#{generation} {sol} evaluation: {value}")
        optimizer.tell(solutions)


if __name__ == "__main__":
    f_cont_int_cat()
    # f_cont_int()
    # f_cont_cat()

We recommend using CatCMAwM for continuous+integer and continuous+categorical settings. In particular, [Hamano et al. 2025] shows that CatCMAwM outperforms CMA-ES with Margin in mixed-integer scenarios. Therefore, we suggest CatCMAwM in place of CMA-ES with Margin or CatCMA.

CMA-ES-SoP (PPSN 2024) by @kento031

CMA-ES on sets of points (CMA-ES-SoP) is a variant of CMA-ES for optimization on sets of points. In this setting, the search space consists of several disjoint subspaces, each containing multiple points at which the objective function can be evaluated. In mixed-variable cases, some of the subspaces are continuous. Note that discrete subspaces with more than five dimensions incur a substantial computational cost for constructing the Voronoi diagrams.

Source code
import numpy as np
from cmaes.cma_sop import CMASoP

# numbers of dimensions in each subspace
subspace_dim_list = [2, 3, 5]
cont_dim = 10

# numbers of points in each subspace
point_num_list = [10, 20, 40]

# number of total dimensions
dim = int(np.sum(subspace_dim_list) + cont_dim)

# objective function
def quadratic(x):
    coef = 1000 ** (np.arange(dim) / float(dim - 1))
    return np.sum((coef * x) ** 2)

# sets_of_points (on [-5, 5])
discrete_subspace_num = len(subspace_dim_list)
sets_of_points = [
    (2 * np.random.rand(point_num_list[i], subspace_dim_list[i]) - 1) * 5
    for i in range(discrete_subspace_num)
]

# add the optimal solution (for benchmark function)
for i in range(discrete_subspace_num):
    sets_of_points[i][-1] = np.zeros(subspace_dim_list[i])
    np.random.shuffle(sets_of_points[i])

# optimizer (CMA-ES-SoP)
optimizer = CMASoP(
    sets_of_points=sets_of_points,
    mean=np.random.rand(dim) * 4 + 1,
    sigma=2.0,
)

best_eval = np.inf
eval_count = 0

for generation in range(400):
    solutions = []
    for _ in range(optimizer.population_size):
        # Ask a parameter
        x, enc_x = optimizer.ask()
        value = quadratic(enc_x)

        # save best eval
        best_eval = np.min((best_eval, value))
        eval_count += 1

        solutions.append((x, value))

    # Tell evaluation values.
    optimizer.tell(solutions)

    print(f"#{generation} ({best_eval} {eval_count})")

    if best_eval < 1e-4 or optimizer.should_stop():
        break

Maximum a Posteriori CMA-ES (PPSN 2024) by @ha-mano

MAP-CMA is a method introduced to interpret the rank-one update in CMA-ES from the perspective of the natural gradient.
The rank-one update derived from this perspective is extensible: an additional term, called the momentum update, appears in the update of the mean vector.
The performance of MAP-CMA does not differ significantly from that of CMA-ES, as the primary motivation for MAP-CMA is a theoretical understanding of CMA-ES.

Source code
import numpy as np
from cmaes import MAPCMA


def rosenbrock(x):
    dim = len(x)
    if dim < 2:
        raise ValueError("dimension must be greater than one")
    return sum(100 * (x[:-1] ** 2 - x[1:]) ** 2 + (x[:-1] - 1) ** 2)


if __name__ == "__main__":
    dim = 20
    optimizer = MAPCMA(mean=np.zeros(dim), sigma=0.5, momentum_r=dim)
    print(" evals    f(x)")
    print("======  ==========")

    evals = 0
    while True:
        solutions = []
        for _ in range(optimizer.population_size):
            x = optimizer.ask()
            value = rosenbrock(x)
            evals += 1
            solutions.append((x, value))
            if evals % 1000 == 0:
                print(f"{evals:5d}  {value:10.5f}")
        optimizer.tell(solutions)

        if optimizer.should_stop():
            break

Safe CMA-ES (GECCO 2024) by @kento031

Safe CMA-ES is a variant of CMA-ES for safe optimization. Safe optimization is formulated as a special class of constrained optimization problem that aims to solve the problem while limiting the number of evaluations of solutions whose safety function values exceed the safety thresholds. Safe CMA-ES requires safe seeds that do not violate the safety constraints. Note that safe CMA-ES is designed for noiseless safe optimization. This module requires torch and gpytorch.

Source code
import numpy as np
from cmaes.safe_cma import SafeCMA

# objective function
def quadratic(x):
    coef = 1000 ** (np.arange(dim) / float(dim - 1)) 
    return np.sum((x * coef) ** 2)

# safety function
def safe_function(x):
    return x[0]

"""
    example with a single safety function
"""
if __name__ == "__main__":
    # number of dimensions
    dim = 5

    # safe seeds
    safe_seeds_num = 10
    safe_seeds = (np.random.rand(safe_seeds_num, dim) * 2 - 1) * 5
    safe_seeds[:, 0] = -np.abs(safe_seeds[:, 0])

    # evaluation of safe seeds (with a single safety function)
    seeds_evals = np.array([quadratic(x) for x in safe_seeds])
    seeds_safe_evals = np.stack([[safe_function(x)] for x in safe_seeds])
    safety_threshold = np.array([0])

    # optimizer (safe CMA-ES)
    optimizer = SafeCMA(
        sigma=1.0,
        safety_threshold=safety_threshold,
        safe_seeds=safe_seeds,
        seeds_evals=seeds_evals,
        seeds_safe_evals=seeds_safe_evals,
    )

    unsafe_eval_counts = 0
    best_eval = np.inf

    for generation in range(400):
        solutions = []
        for _ in range(optimizer.population_size):
            # Ask a parameter
            x = optimizer.ask()
            value = quadratic(x)
            safe_value = np.array([safe_function(x)])

            # save best eval
            best_eval = np.min((best_eval, value))
            unsafe_eval_counts += (safe_value > safety_threshold)

            solutions.append((x, value, safe_value))

        # Tell evaluation values.
        optimizer.tell(solutions)

        print(f"#{generation} ({best_eval} {unsafe_eval_counts})")
        
        if optimizer.should_stop():
            break

What's Changed


v0.11.1

13 Aug 07:03
a6e791a


What's Changed

Full Changelog: v0.11.0...v0.11.1

PyPI: https://pypi.org/project/cmaes/0.11.1/

v0.11.0

01 Aug 01:05
7513df8


Highlights

CatCMA [Hamano+, GECCO2024]

arXiv: https://arxiv.org/pdf/2405.09962

CatCMA is a method for mixed-category optimization problems, in which continuous and categorical variables are optimized simultaneously. CatCMA employs the joint probability distribution of a multivariate Gaussian distribution and categorical distributions as the search distribution.


Usage is as follows:

import numpy as np
from cmaes import CatCMA


def sphere_com(x, c):
    dim_co = len(x)
    dim_ca = len(c)
    if dim_co < 2:
        raise ValueError("dimension must be greater than one")
    sphere = sum(x * x)
    com = dim_ca - sum(c[:, 0])
    return sphere + com


def rosenbrock_clo(x, c):
    dim_co = len(x)
    dim_ca = len(c)
    if dim_co < 2:
        raise ValueError("dimension must be greater than one")
    rosenbrock = sum(100 * (x[:-1] ** 2 - x[1:]) ** 2 + (x[:-1] - 1) ** 2)
    clo = dim_ca - (c[:, 0].argmin() + c[:, 0].prod() * dim_ca)
    return rosenbrock + clo


def mc_proximity(x, c, cat_num):
    dim_co = len(x)
    dim_ca = len(c)
    if dim_co < 2:
        raise ValueError("dimension must be greater than one")
    if dim_co != dim_ca:
        raise ValueError(
            "number of dimensions of continuous and categorical variables "
            "must be equal in mc_proximity"
        )

    c_index = np.argmax(c, axis=1) / cat_num
    return sum((x - c_index) ** 2) + sum(c_index)


if __name__ == "__main__":
    cont_dim = 5
    cat_dim = 5
    cat_num = np.array([3, 4, 5, 5, 5])
    # cat_num = 3 * np.ones(cat_dim, dtype=np.int64)
    optimizer = CatCMA(mean=3.0 * np.ones(cont_dim), sigma=1.0, cat_num=cat_num)

    for generation in range(200):
        solutions = []
        for _ in range(optimizer.population_size):
            x, c = optimizer.ask()
            value = mc_proximity(x, c, cat_num)
            if generation % 10 == 0:
                print(f"#{generation} {value}")
            solutions.append(((x, c), value))
        optimizer.tell(solutions)

        if optimizer.should_stop():
            break

What's Changed

New Contributors

Full Changelog: v0.10.0...v0.11.0

v0.10.0

19 Jul 04:11
4ebcdbd


What's Changed

Full Changelog: v0.9.1...v0.10.0

v0.9.1

06 Jan 06:06
f7ccf97


What's Changed

  • Remove tox.ini by @c-bata in #131
  • Fix a broken link to Optuna's documentation by @c-bata in #132
  • Drop Python 3.6 support. by @c-bata in #130
  • Reuse CMA inside CMAwM by @knshnb in #133
  • Add rng related methods by @knshnb in #135
  • Fix correction of out-of-range continuous params of CMAwM by @knshnb in #134
  • Fix correction of out-of-range discrete params of CMAwM by @knshnb in #136
  • Avoid to use typing.List, typing.Dict, and typing.Tuple. by @c-bata in #139
  • Check feasibility of sampled discrete parameters in CMAwM by @knshnb in #140
  • Refactor CMAwM by @knshnb in #141
  • Add a test case for no discrete spaces. by @c-bata in #143
  • Allow no discrete spaces in CMAwM by @knshnb in #142
  • Remove warnings in CMAwM class by @c-bata in #144
  • Revert handling of infeasible discrete parameters by @knshnb in #145
  • Bump the version up to v0.9.1 by @c-bata in #138

Full Changelog: v0.9.0...v0.9.1

v0.9.0

08 Nov 08:36
8f7a584


Highlights

CMA-ES with Margin is now available. It introduces a lower bound on the marginal probability associated with each discrete dimension so that samples avoid being fixed to a single point. It can be applied to mixed spaces of continuous (float) and discrete (including integer and binary) variables. This algorithm was proposed by Hamano, Saito, @nomuramasahir0 (a maintainer of this library), and Shirakawa, and was nominated for the best paper award at the GECCO'22 ENUM track.

R. Hamano, S. Saito, M. Nomura, S. Shirakawa, CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization, GECCO, 2022.

[Figures: behavior of CMA-ES vs. CMA-ES with Margin]

The above figures are taken from EvoConJP/CMA-ES_with_Margin.

Please check out the following examples for the usage.

What's Changed

New Contributors

References

Full Changelog: v0.8.2...v0.9.0

v0.8.2

19 Feb 15:39


CHANGES

  • Fix dimensions of Warm starting CMA-ES (#98).

v0.8.1

10 Feb 10:01


CHANGES

  • Unset version constraint of numpy.
  • Remove extra_requires for development.

v0.8.0

03 Feb 09:30


CHANGES

New features

Warm-starting CMA-ES is now available. It estimates a promising distribution from a similar optimization task and uses it to generate the parameters of the multivariate Gaussian distribution that initializes CMA-ES, so that you can exploit previous optimization results. This algorithm was proposed by @nmasahiro, a maintainer of this library, and accepted at AAAI 2021.

[Figures: results on the rotated ellipsoid and quadratic benchmark functions]

Link