Configs and recipes for our global attention-based encoder-decoder models.
Paper: [The Conformer Encoder May Reverse the Time Dimension](https://arxiv.org/abs/2410.00680)
We use [RETURNN](https://github.com/rwth-i6/returnn) based on PyTorch for training and our setups are based on [Sisyphus](https://github.com/rwth-i6/sisyphus).
### Sisyphus Setup
To prepare a Sisyphus setup, you can run `prepare_sis_dir.sh <setup-dirname>`.
After that, you need to create an `__init__.py` file inside
the `config` folder and import the Sisyphus config there to run it.
Here is an example that runs the flipped-Conformer AED experiments for LibriSpeech:
```python
from i6_experiments.users.schmitt.experiments.exp2024_08_27_flipped_conformer import flipped_conformer_exps
def main():
    flipped_conformer_exps.py()
```
Then, to run the experiments, you just call `./sis m` inside the setup dir.