1 change: 1 addition & 0 deletions .gitignore
@@ -6,6 +6,7 @@ __pycache__
*.swp
*.swo
*DS_Store
_version.py

.tox/
build/
3 changes: 3 additions & 0 deletions CHANGELOG
@@ -11,6 +11,9 @@ The rules for CHANGELOG file:

.. inclusion-marker-changelog-start

Unreleased
----------

0.3.0 (2025/06/12)
------------------
- Add ``_BasePCov`` class (#248)
28 changes: 28 additions & 0 deletions CITATION.cff
@@ -0,0 +1,28 @@
cff-version: 1.2.0
message: "If you use scikit-matter for your work, please read and cite it as below."
title: >-
scikit-matter : a suite of generalisable machine learning methods born out of chemistry and materials science [version 2; peer review: 3 approved, 1 approved with reservations]
type: journalArticle
issue: 81
volume: 3
authors:
- family-names: Goscinski
given-names: Alexander
- family-names: Principe
given-names: Victor P.
- family-names: Fraux
given-names: Guillaume
- family-names: Kliavinek
given-names: Sergei
- family-names: Helfrecht
given-names: Benjamin A.
- family-names: Loche
given-names: Philip
- family-names: Ceriotti
given-names: Michele
- family-names: Cersonsky
given-names: Rose K.
date-published: 2023
identifiers:
- type: doi
value: 10.12688/openreseurope.15789.2
1 change: 1 addition & 0 deletions MANIFEST.in
@@ -1,6 +1,7 @@
graft src

include LICENSE
include CITATION.cff
include README.rst

prune docs
19 changes: 13 additions & 6 deletions README.rst
@@ -1,13 +1,16 @@
scikit-matter
=============
|tests| |codecov| |pypi| |conda| |docs| |doi|

|tests| |codecov| |pypi| |conda| |docs-stable| |docs-latest| |doi|

A collection of ``scikit-learn`` compatible utilities that implement methods born out of
the materials science and chemistry communities.

For details, tutorials, and examples, please have a look at our `documentation`_.
For details, tutorials, and examples, please have a look at our documentation_. We also
provide `latest documentation`_ built from the current, unreleased development version.

.. _`documentation`: https://scikit-matter.readthedocs.io
.. _`documentation`: https://scikit-matter.readthedocs.io/en/v0.3/
.. _`latest documentation`: https://scikit-matter.readthedocs.io/en/latest

.. marker-installation

@@ -98,9 +101,13 @@ Thanks goes to all people that make scikit-matter possible:
:alt: Code coverage
:target: https://codecov.io/gh/scikit-learn-contrib/scikit-matter/

.. |docs| image:: https://img.shields.io/badge/📚_documentation-latest-sucess
:alt: Python
:target: documentation_
.. |docs-stable| image:: https://img.shields.io/badge/📚_Documentation-stable-sucess
:alt: Documentation of stable released version
:target: `documentation`_

.. |docs-latest| image:: https://img.shields.io/badge/📒_Documentation-latest-yellow.svg
:alt: Documentation of latest unreleased version
:target: `latest documentation`_

.. |pypi| image:: https://img.shields.io/pypi/v/skmatter.svg
:alt: Latest PYPI version
46 changes: 46 additions & 0 deletions docs/src/contributing.rst
@@ -212,3 +212,49 @@ properly. It should look something like this:

You're good to go! Time to submit a `pull request.
<https://github.com/lab-cosmo/scikit-matter/pulls>`_

How to Perform a Release
-------------------------

1. **Prepare a Release Pull Request**

- Based on the ``main`` branch, create a branch ``release-x.y.z`` and open a PR.
- Ensure that all `CI tests
<https://github.com/scikit-learn-contrib/scikit-matter/actions>`_ pass.
- Optionally, run the tests locally to double-check.

2. **Update the Changelog**

- Edit the changelog located in ``CHANGELOG``:
- Add a new section for the new version, summarizing the changes based on the
PRs merged since the last release.
- Leave a placeholder section titled *Unreleased* for future updates (see the
sketch after this list).

3. **Merge the PR and Create a Tag**

- Merge the release PR.
- Update your local ``main`` branch and check with ``git log`` that the latest
commit is the merged release PR.
- Create the tag directly on the ``main`` branch.
- Push the tag to GitHub. For example, for a release of version ``x.y.z``:

.. code-block:: bash

git checkout main
git pull
git tag -a vx.y.z -m "Release vx.y.z"
git push --tags

4. **Finalize the GitHub Release**

- Once the PR is merged, the CI will automatically:
- Publish the package to PyPI.
- Create a draft release on GitHub.
- Update the GitHub release notes by pasting the changelog for the version.

5. **Merge Conda Recipe Changes**

- If needed, resolve issues in and then merge the automatically created PR on the
`conda recipe <https://github.com/conda-forge/skmatter-feedstock>`_.
- Once this PR is merged, the new version is published automatically on the
`conda-forge <https://anaconda.org/conda-forge/skmatter>`_ channel.
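
As a purely illustrative sketch (not part of this diff), the ``CHANGELOG`` edit in
step 2 for a hypothetical release ``x.y.z`` would follow the layout of the existing
entries, e.g.:

.. code-block:: text

   Unreleased
   ----------

   x.y.z (YYYY/MM/DD)
   ------------------
   - One-line summary of a merged PR (#NNN)
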
4 changes: 0 additions & 4 deletions examples/neighbors/pamm.py
@@ -17,7 +17,6 @@
of the H-bond motif.
"""


# %%
from typing import Callable, Union

@@ -150,7 +149,6 @@ def _update_cluster_cov(
idxroot: np.ndarray,
center_idx: np.ndarray,
):

if cell is not None:
cov = _get_lcov_clusterp(
len(X), nsamples, X, idxroot, center_idx[k], probs, cell
@@ -194,7 +192,6 @@ def _get_lcov_cluster(
probs: np.ndarray,
cell: np.ndarray,
):

ww = np.zeros(N)
normww = logsumexp(probs[clroots == idcl])
ww[clroots == idcl] = np.exp(probs[clroots == idcl] - normww)
@@ -211,7 +208,6 @@ def _get_lcov_clusterp(
probs: np.ndarray,
cell: np.ndarray,
):

ww = np.zeros(N)
totnormp = logsumexp(probs)
cov = np.zeros((x.shape[1], x.shape[1]), dtype=float)
3 changes: 0 additions & 3 deletions examples/neighbors/sparse-kde.py
@@ -20,7 +20,6 @@
Here we first sample from these three Gaussians.
"""


# %%
import time

@@ -98,7 +97,6 @@

# %%
class GaussianMixtureModel:

def __init__(
self,
weights: np.ndarray,
@@ -116,7 +114,6 @@ def __init__(
self.norm = 1 / np.sqrt((2 * np.pi) ** self.dimension * self.cov_det)

def __call__(self, x: np.ndarray, i: int = None):

if len(x.shape) == 1:
x = x[np.newaxis, :]
if self.period is not None:
1 change: 0 additions & 1 deletion examples/pcovr/PCovR.py
@@ -8,7 +8,6 @@
# %%
#


import numpy as np
from matplotlib import cm
from matplotlib import pyplot as plt
1 change: 1 addition & 0 deletions examples/pcovr/PCovR_Regressors.py
@@ -4,6 +4,7 @@
Choosing Different Regressors for PCovR
=======================================
"""

# %%
#
import time
1 change: 0 additions & 1 deletion examples/reconstruction/PlotLFRE.py
@@ -20,7 +20,6 @@
# %%
#


import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
1 change: 0 additions & 1 deletion examples/reconstruction/PlotPointwiseGFRE.py
@@ -16,7 +16,6 @@
# %%
#


import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
12 changes: 9 additions & 3 deletions examples/regression/Ridge2FoldCVRegularization.py
@@ -114,11 +114,15 @@
)

sklearn_ridge_2foldcv_tikhonov = RidgeCV(
alphas=alphas, cv=cv, fit_intercept=False # remove the incluence of learning bias
alphas=alphas,
cv=cv,
fit_intercept=False,  # remove the influence of learning bias
)

sklearn_ridge_loocv_tikhonov = RidgeCV(
alphas=alphas, cv=None, fit_intercept=False # remove the incluence of learning bias
alphas=alphas,
cv=None,
fit_intercept=False,  # remove the influence of learning bias
)

# %%
@@ -331,7 +335,9 @@ def get_train_test_error(estimator):
)

sklearn_ridge_loocv_tikhonov = RidgeCV(
alphas=alphas, cv=None, fit_intercept=False # remove the incluence of learning bias
alphas=alphas,
cv=None,
fit_intercept=False,  # remove the influence of learning bias
)

print("skmatter 2-fold CV cutoff")
1 change: 1 addition & 0 deletions examples/selection/FeatureSelection.py
@@ -4,6 +4,7 @@
PCovR-Inspired Feature Selection
================================
"""

# %%
#
import numpy as np
1 change: 0 additions & 1 deletion examples/selection/Selectors-Pipelines.py
@@ -9,7 +9,6 @@
# %%
#


import numpy as np
from matplotlib import pyplot as plt
from sklearn.datasets import load_diabetes
20 changes: 9 additions & 11 deletions pyproject.toml
@@ -1,6 +1,7 @@
[build-system]
requires = [
"setuptools",
"setuptools >= 77",
"setuptools_scm >= 8",
"wheel",
]
build-backend = "setuptools.build_meta"
@@ -57,12 +58,18 @@ repository = "https://github.com/scikit-learn-contrib/scikit-matter"
issues = "https://github.com/scikit-learn-contrib/scikit-matterissues"
changelog = "http://scikit-matter.readthedocs.io/en/latest/changelog.html"

[tool.check-manifest]
ignore = ["src/skmatter/_version.py"]

[tool.setuptools.packages.find]
where = ["src"]

[tool.setuptools.dynamic]
version = {attr = "skmatter.__version__"}

[tool.setuptools_scm]
version_file = "src/skmatter/_version.py"

[tool.coverage.run]
branch = true
data_file = 'tests/.coverage'
@@ -75,15 +82,6 @@ include = [
[tool.coverage.xml]
output = 'tests/coverage.xml'

[tool.isort]
skip = "__init__.py"
profile = "black"
line_length = 88
indent = 4
include_trailing_comma = true
lines_after_imports = 2
known_first_party = "skmatter"

[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = [
@@ -94,7 +92,7 @@ addopts = [
]

[tool.ruff]
exclude = ["docs/src/examples/"]
exclude = ["docs/src/examples/", "src/torchpme/_version.py"]
lint.ignore = [
"F401",
"E203",
2 changes: 1 addition & 1 deletion src/skmatter/__init__.py
@@ -7,4 +7,4 @@
coding guidelines to promote usability and interoperability with existing workflows.
"""

__version__ = "0.3.0"
from ._version import __version__ # noqa: F401
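
For context, this import pairs with the ``[tool.setuptools_scm]`` table in
``pyproject.toml`` above: ``src/skmatter/_version.py`` is generated at build time and
exposes ``__version__``. A minimal, hypothetical sketch of a defensive lookup (the
``importlib.metadata`` fallback is an illustration and not part of this change):

.. code-block:: python

   # Hedged sketch: prefer the file written by setuptools_scm at build time, then
   # fall back to the installed package metadata; not part of this PR.
   try:
       from skmatter._version import __version__
   except ImportError:  # e.g. a raw source checkout without any build step
       from importlib.metadata import PackageNotFoundError, version

       try:
           __version__ = version("skmatter")
       except PackageNotFoundError:  # package not installed at all
           __version__ = "0.0.0+unknown"

   print(__version__)
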
1 change: 0 additions & 1 deletion src/skmatter/_selection.py
@@ -1059,7 +1059,6 @@ def __init__(
full=False,
random_state=0,
):

self.mixing = mixing
self.initialize = initialize

2 changes: 1 addition & 1 deletion src/skmatter/decomposition/_kernel_pcovr.py
@@ -238,7 +238,7 @@ def _fit(self, K, Yhat, W):
_, S, Vt = self._decompose_truncated(K_tilde)
else:
raise ValueError(
"Unrecognized svd_solver='{0}'" "".format(self._fit_svd_solver)
"Unrecognized svd_solver='{0}'".format(self._fit_svd_solver)
)

U = Vt.T
4 changes: 2 additions & 2 deletions src/skmatter/decomposition/_pcov.py
@@ -136,7 +136,7 @@ def _fit_sample_space(self, X, Y, Yhat, W, compute_pty_=True):
U, S, Vt = self._decompose_truncated(Kt)
else:
raise ValueError(
"Unrecognized svd_solver='{0}'" "".format(self.fit_svd_solver_)
"Unrecognized svd_solver='{0}'".format(self.fit_svd_solver_)
)

self.singular_values_ = np.sqrt(S.copy())
@@ -231,7 +231,7 @@ def _decompose_full(self, mat):
if self.n_components_ == "mle":
if self.n_samples_in_ < self.n_features_in_:
raise ValueError(
"n_components='mle' is only supported " "if n_samples >= n_features"
"n_components='mle' is only supported if n_samples >= n_features"
)
elif (
not 0 <= self.n_components_ <= min(self.n_samples_in_, self.n_features_in_)
1 change: 0 additions & 1 deletion src/skmatter/metrics/_pairwise.py
@@ -150,7 +150,6 @@ def pairwise_mahalanobis_distances(
def _mahalanobis(
cell: np.ndarray, X: np.ndarray, Y: np.ndarray, cov_inv: np.ndarray
):

XY = np.concatenate([x - Y for x in X])
if cell is not None:
XY -= np.round(XY / cell) * cell