<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="logo-dark.png">
    <source media="(prefers-color-scheme: light)" srcset="logo-light.png">
    <img alt="kooplearn logo" width="60%" src="logo-light.png">
  </picture>
</p>

<a href="https://kooplearn.readthedocs.io/latest/"><img alt="Static Badge" src="https://img.shields.io/badge/Documentation-informational"></a>

`kooplearn` is a Python library to learn evolution operators — also known as _Koopman_ or _Transfer_ operators — from data. `kooplearn` models can:

1. Predict the evolution of states *and* observables.
2. Estimate the eigenvalues and eigenfunctions of the learned evolution operators.
3. Compute the [dynamic mode decomposition](https://en.wikipedia.org/wiki/Dynamic_mode_decomposition) of states *and* observables.
4. Learn neural-network representations $x_t \mapsto \varphi(x_t)$ for evolution operators.
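To make these capabilities concrete, here is a minimal, self-contained sketch in plain NumPy (this is *not* kooplearn's API): a DMD-style least-squares estimate of a linear evolution operator from a single trajectory, used to forecast the next state and to read off the operator's eigenvalues.

```python
import numpy as np

# Toy linear dynamics x_{t+1} = A x_t, observed along one trajectory.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, -0.2], [0.1, 0.8]])
X = np.empty((200, 2))
X[0] = rng.normal(size=2)
for t in range(199):
    X[t + 1] = A_true @ X[t]

# DMD-style least-squares fit of the evolution operator:
# find A_hat minimizing ||X_next - X_prev A_hat^T|| via the pseudoinverse.
X_prev, X_next = X[:-1], X[1:]
A_hat = (np.linalg.pinv(X_prev) @ X_next).T

# 1. Predict the state one step beyond the data.
x_pred = A_hat @ X[-1]

# 2. Eigenvalues of the learned operator encode decay rates and oscillation
#    frequencies of the dynamics.
eigvals = np.linalg.eigvals(A_hat)
```

On noiseless linear data like this, `A_hat` recovers `A_true` up to numerical error; kooplearn's estimators target the much harder nonlinear and stochastic settings via feature maps and kernels.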
## Why Choose `kooplearn`?

1. It is easy to use and strictly adheres to the [scikit-learn API](https://scikit-learn.org/stable/api/index.html).
2. **Kernel estimators** are state-of-the-art:

   * `kooplearn` implements the *Reduced Rank Regressor* from [Kostic et al. 2022](https://arxiv.org/abs/2205.14027), which is [provably better](https://arxiv.org/abs/2302.02004) than the classical [kernel DMD](https://arxiv.org/abs/1411.2260) at estimating eigenvalues and eigenfunctions.
   * It also implements [Nyström](https://arxiv.org/abs/2306.04520) and [randomized](https://arxiv.org/abs/2312.17348) estimators for blazingly fast kernel learning.
3. Includes representation-learning losses (implemented in both PyTorch and JAX) to train neural-network Koopman embeddings.
4. Offers a collection of datasets for benchmarking evolution-operator learning algorithms.
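The core idea behind reduced rank regression is to constrain the rank of the estimated operator, which acts as a spectral regularizer. The following sketch shows the classical *linear* reduced rank regression (Izenman's solution: ordinary least squares followed by projection onto the top right singular directions of the fitted values). It is a simplified illustration of the rank constraint only, not the kernel-based estimator of Kostic et al. implemented in kooplearn.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_in, d_out, rank = 500, 6, 5, 2

# Synthetic data whose true input -> output map has low rank.
B_true = rng.normal(size=(d_in, rank)) @ rng.normal(size=(rank, d_out))
X = rng.normal(size=(n, d_in))
Y = X @ B_true + 0.01 * rng.normal(size=(n, d_out))

# Step 1: ordinary least squares.
B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]

# Step 2: project onto the top-`rank` right singular directions of the
# fitted values X @ B_ols -- the classical reduced rank regression solution.
_, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
V_r = Vt[:rank].T
B_rrr = B_ols @ V_r @ V_r.T  # rank-constrained estimate
```

Here `B_rrr` has rank exactly 2 by construction and, because the true map is itself low-rank, it matches `B_true` closely while filtering out noise directions that plain least squares would keep.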
## Installation

To install the core version of `kooplearn`:

### **pip**

```bash
pip install kooplearn
```

### **uv**

```bash
uv add kooplearn
```

To enable neural-network representations using `kooplearn.torch` or `kooplearn.jax`:

### **pip**

```bash
# Torch
pip install "kooplearn[torch]"
# JAX
pip install "kooplearn[jax]"
```

### **uv**

```bash
# Torch
uv add "kooplearn[torch]"
# JAX
uv add "kooplearn[jax]"
```

## Contributing

We welcome contributions from the community! If you're interested in contributing to `kooplearn`, please follow these steps: