
Commit f7fd622: Merge pull request #3 from MSD-IRIMAS/aif/readme ([DOC] Readme)
2 parents 5b47b2e + 47ebecc

File tree: 3 files changed (+106 / -7 lines)

README.md (101 additions, 2 deletions)
> ⚠️ **Alert:** If you are using this code with **Keras v3**, make sure you are using **Keras ≥ 3.6.0**.
> Earlier versions of Keras v3 do not honor `trainable=False`, which will result in unexpectedly **training the hand-crafted filters** in **LITEMV**.

# Re-framing Time Series Augmentation Through the Lens of Generative Models

Authors: [Ali Ismail-Fawaz](https://hadifawaz1999.github.io/)<sup>1</sup>, [Maxime Devanne](https://maxime-devanne.com/)<sup>1</sup>, [Stefano Berretti](https://www.micc.unifi.it/berretti/)<sup>2</sup>, [Jonathan Weber](https://www.jonathan-weber.eu/)<sup>1</sup> and [Germain Forestier](https://germain-forestier.info/)<sup>1,3</sup>

<sup>1</sup> IRIMAS, Université de Haute-Alsace, France<br>
<sup>2</sup> MICC, University of Florence, Italy<br>
<sup>3</sup> DSAI, Monash University, Australia
This repository is the source code of the article "[Re-framing Time Series Augmentation Through the Lens of Generative Models](#)", accepted at the [10th Workshop on Advanced Analytics and Learning on Temporal Data (AALTD 2025)](https://ecml-aaltd.github.io/aaltd2025/), held in conjunction with the [2025 European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2025)](https://ecmlpkdd.org/2025/).

In this article, we present a benchmark comparison of 22 data augmentation techniques on 131 time series classification datasets from the [UCR archive](https://www.cs.ucr.edu/%7Eeamonn/time_series_data_2018/).

<img id="img-overview" src="https://raw.githubusercontent.com/MSD-IRIMAS/Data-Augmentation-4-TSC/main/static/summary-methods.png" class="interpolation-image" style="width: 100%; height: 100%; border: none;">
## Abstract

Time series classification is widely used in many fields, but it often suffers from a lack of labeled data. To address this, researchers commonly apply data augmentation techniques that generate synthetic samples through transformations such as jittering, warping, or resampling. However, with an increasing number of available augmentation methods, it becomes difficult to choose the most suitable one for a given task. In many cases, this choice is based on intuition or visual inspection. Assessing the impact of this choice on classification accuracy requires training models, which is time-consuming and depends on the dataset. In this work, we adopt a generative model perspective and evaluate augmentation methods prior to training any classifier, using metrics that quantify both fidelity and diversity of the generated samples. We benchmark 22 augmentation techniques on 131 public datasets using eight metrics. Our results provide a practical and efficient way to compare augmentation methods without relying solely on classifier performance.
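The abstract names jittering as one of the simplest transformation families. For a quick intuition, here is a minimal numpy sketch of jittering (additive Gaussian noise); it is only a generic illustration, not the implementation used in this repository:

```python
import numpy as np

def jitter(x, sigma=0.03, seed=None):
    """Add i.i.d. Gaussian noise to a series, preserving its shape."""
    rng = np.random.default_rng(seed)
    return x + rng.normal(0.0, sigma, size=x.shape)

# Example: perturb a synthetic sine series.
series = np.sin(np.linspace(0, 2 * np.pi, 128))
augmented = jitter(series, sigma=0.05, seed=0)  # same shape, slightly perturbed values
```

The noise scale `sigma` controls the fidelity/diversity trade-off the abstract refers to: small values stay close to the original sample, larger values produce more diverse but less faithful series.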
## Data

In this work we utilize 131 datasets of the UCR archive, taken from the [original repository](https://www.cs.ucr.edu/%7Eeamonn/time_series_data_2018/) and the [newly added datasets](https://link.springer.com/content/pdf/10.1007/s10618-024-01022-1.pdf).

However, you are not obligated to download them yourself, as our code loads the datasets through the [Time Series Classification webpage](https://timeseriesclassification.com/) using the [aeon-toolkit](https://aeon-toolkit.org/).
## Docker

This repository supports the use of Docker. To build the Docker image from the [dockerfile](dockerfile), simply run the following command (assuming you have Docker and the NVIDIA CUDA container toolkit installed):
```bash
docker build --build-arg USER_ID=$(id -u) --build-arg GROUP_ID=$(id -g) -t data-augmentation-review-image .
```
After the image has been built successfully, create the Docker container with:
```bash
docker run --gpus all -it --name data-augmentation-review-container -v "$(pwd):/home/myuser/code" --user $(id -u):$(id -g) data-augmentation-review-image bash
```

The code will be available under the directory `/home/myuser/code/` inside the Docker container, and the container will have access to GPU acceleration.
## Requirements

If you do not want to use Docker, simply install the project with the following commands:
```bash
python3 -m venv ./data-augmentation-review-venv
source ./data-augmentation-review-venv/bin/activate
pip install --upgrade pip
pip install -e .[dev]
```

Make sure you have [`jq`](https://jqlang.org/) installed on your system. This project supports `python>=3.10` only.

You can see the list of dependencies and their required versions in the [pyproject.toml](pyproject.toml) file.
## Running the code on a single experiment

If you wish to run a single experiment (one dataset, one augmentation method, one model), first open a terminal inside your Docker container if you are not already in it:
```bash
docker exec -it data-augmentation-review-container bash
```
Then run, for example, the following command to apply Amplitude Warping (AW) on the Adiac dataset:
```bash
python3 main.py task=generate_data dataset_name=Adiac generate_data.method=AW
```
The code uses [hydra](https://hydra.cc/docs/intro/) for parameter configuration; see the [hydra configuration file](config/config_hydra.yaml) for a detailed view of the parameters of our experiments.
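For intuition about what Amplitude Warping does, the idea is to scale a series by a smooth random curve drawn around 1.0. The following is a hypothetical numpy sketch of that idea (using piecewise-linear interpolation between random knots), not the implementation used in this repository:

```python
import numpy as np

def amplitude_warp(x, n_knots=4, sigma=0.2, seed=None):
    """Multiply a series by a smooth random curve fluctuating around 1.0."""
    rng = np.random.default_rng(seed)
    # Random scaling factors at a few evenly spaced knots.
    knots = rng.normal(1.0, sigma, size=n_knots + 2)
    knot_pos = np.linspace(0, len(x) - 1, n_knots + 2)
    # Interpolate the knots to obtain a warping curve over the whole series.
    warp = np.interp(np.arange(len(x)), knot_pos, knots)
    return x * warp

series = np.sin(np.linspace(0, 4 * np.pi, 256))
augmented = amplitude_warp(series, seed=42)
```

Because the warping curve varies slowly, the augmented series keeps the original's temporal structure while its local amplitude changes, which is why such methods tend to preserve class membership.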
## Running the whole benchmark

If you wish to run all the experiments to reproduce the results of our article, run the following for the data generation experiments:
```bash
chmod +x run_generate_data.sh
nohup ./run_generate_data.sh &
```
the following for training the feature extractor:
```bash
chmod +x run_train_feature_extractor.sh
nohup ./run_train_feature_extractor.sh &
```
and the following for evaluating the generated samples:
```bash
chmod +x run_evaluate_generation.sh
nohup ./run_evaluate_generation.sh &
```
## Cite this work

If you use this work, please cite the following:
```bibtex
@inproceedings{ismail-fawaz2025Data-Aug-4-TSC,
  author    = {Ismail-Fawaz, Ali and Devanne, Maxime and Berretti, Stefano and Weber, Jonathan and Forestier, Germain},
  title     = {Re-framing Time Series Augmentation Through the Lens of Generative Models},
  booktitle = {ECML/PKDD Workshop on Advanced Analytics and Learning on Temporal Data},
  city      = {Porto},
  country   = {Portugal},
  year      = {2025}
}
```
## Acknowledgments

This work was supported by the ANR DELEGATION project (grant ANR-21-CE23-0014) of the French Agence Nationale de la Recherche. The authors would like to acknowledge the High Performance Computing Center of the University of Strasbourg for supporting this work by providing scientific support and access to computing resources. Part of the computing resources were funded by the Equipex Equip@Meso project (Programme Investissements d'Avenir) and the CPER Alsacalcul/Big Data. The authors would also like to thank the creators and providers of the UCR Archive.

config/config_hydra.yaml (5 additions, 5 deletions)

```diff
@@ -11,12 +11,12 @@ dataset_name: Adiac

 train_feature_extractor:
   estimator: LITE
-  n_epochs: 2
+  n_epochs: 1500
   batch_size: 64
-  runs: 2
+  runs: 5

 generate_data:
-  n_generations: 2
+  n_generations: 5
   method: AW

 ## for Scaling no parameters
@@ -41,7 +41,7 @@ generate_data:

 evaluate_generation:
   method: AW
-  n_generations: 2
+  n_generations: 5
 ## for RGW (Random Guided Warping), DGW (Discriminative Guided Warping) and WBA (Weighted Barycenter Averaging)
   distance: "msm" ## possibilities: any aeon distance

@@ -60,4 +60,4 @@ evaluate_generation:

 feature_extractor:
   estimator: LITE
-  runs: 2
+  runs: 5
```

static/summary-methods.png (186 KB, new image)
