
Commit 233f890
Merge branch 'main' of github.com:ProMeText/Aquilign
2 parents 0c99b66 + 54a0d4f

File tree

2,908 files changed: +235 additions, -3,360,439 deletions


README.md

Lines changed: 210 additions & 55 deletions
@@ -1,103 +1,258 @@
-# AQUILIGN -- Mutilingual aligner and collator
+# 📐 AQUILIGN – Multilingual Aligner and Collator
 
 [![codecov](https://codecov.io/github/ProMeText/Aquilign/graph/badge.svg?token=TY5HCBOOKL)](https://codecov.io/github/ProMeText/Aquilign)
+[![Last Commit](https://img.shields.io/github/last-commit/ProMeText/Aquilign)](https://github.com/ProMeText/Aquilign/commits/main)
+[![Issues](https://img.shields.io/github/issues/ProMeText/Aquilign)](https://github.com/ProMeText/Aquilign/issues)
+[![Paper: CHR 2024](https://img.shields.io/badge/📄_Paper-CHR%202024-blue)](https://ceur-ws.org/Vol-3834/paper104.pdf)
+[![Forks](https://img.shields.io/github/forks/ProMeText/Aquilign)](https://github.com/ProMeText/Aquilign/network/members)
+[![Stars](https://img.shields.io/github/stars/ProMeText/Aquilign)](https://github.com/ProMeText/Aquilign/stargazers)
 
+💡 *How can we computationally align medieval texts written in different languages and copied over centuries — without losing their philological depth?*
 
-This repo contains a set of scripts to align (and soon collate) a multilingual medieval corpus. Its designers are Matthias Gille Levenson, Lucence Ing and Jean-Baptiste Camps.
+**AQUILIGN** is a multilingual alignment and collation engine designed for **historical and philological corpora**.
+It performs **clause-level alignment** of parallel texts using a combination of **regular-expression and BERT-based segmentation**, and supports multilingual workflows across medieval Romance, Latin, and Middle English texts.
 
-It is based on a fork of the automatic multilingual sentence aligner Bertalign.
+🧪 Developed by [Matthias Gille Levenson](https://github.com/matgille), [Lucence Ing](https://cv.hal.science/lucence-ing), and [Jean-Baptiste Camps](https://github.com/Jean-Baptiste-Camps).
+Originally presented at the *Computational Humanities Research Conference (CHR 2024)* — see [citation](https://github.com/ProMeText/Aquilign/blob/main/README.md#-citation) for the full reference.
 
-The scripts relies on a prior phase of text segmentation at syntagm level using regular expressions or bert-based segmentation to match grammatical syntagms and produce a more precise alignment.
 
-## Installation
+---
 
-**Caveat**: the code is being tested on Python 3.9 and 3.10 due to some libraries limitations.
+## 💡 Key Features
 
-`pip3 install -r requirements.txt`
+- 🔀 **Multilingual clause-level alignment** using contextual embeddings
+- ✂️ **Trainable segmentation module** (BERT-based or regex-based)
+- 🧩 **Collation-ready architecture** (stemmatology support in development)
+- 📚 Optimized for **premodern and historical corpora**
 
+AQUILIGN builds on a fork of [Bertalign](https://github.com/bfsujason/bertalign), customized for historical languages and alignment evaluation.
 
-## Training the segmenter
+---
 
-The segmenter we use is based on a Bert AutoModelForTokenClassification that is trainable.
+## ⚙️ Installation
 
-Example of use:
+> ⚠️ **Caveat**: AQUILIGN is currently tested on **Python 3.9 and 3.10** due to certain library constraints.
+> Compatibility with other versions is not guaranteed.
 
-`python3 train_tokenizer.py -m google-bert/bert-base-multilingual-cased -t ../Multilingual_Aegidius/data/segmentation_data/split/multilingual/train.json -d ../Multilingual_Aegidius/data/segmentation_data/split/multilingual/dev.json -e ../Multilingual_Aegidius/data/segmentation_data/split/multilingual/test.json -ep 100 -b 128 --device cuda:0 -bf16 -n multilingual_model -s 2 -es 10`
+```bash
+git clone https://github.com/ProMeText/Aquilign.git
+cd Aquilign
+pip install -r requirements.txt
+```
+
+## 🧠 Training the Segmenter
+
+The segmenter is based on a trainable `BertForTokenClassification` model from Hugging Face’s `transformers` library.
+
+We fine-tune this model to detect custom sentence delimiters (`£`) in historical texts from the **[Multilingual Segmentation Dataset](https://github.com/carolisteia/multilingual-segmentation-dataset)**.
+
+---
+
+### 🔧 Example Command
+
+```bash
+python3 train_tokenizer.py \
+  -m google-bert/bert-base-multilingual-cased \
+  -t multilingual-segmentation-dataset/data/Multilingual_Aegidius/segmented/split/multilingual/train.json \
+  -d multilingual-segmentation-dataset/data/Multilingual_Aegidius/segmented/split/multilingual/dev.json \
+  -e multilingual-segmentation-dataset/data/Multilingual_Aegidius/segmented/split/multilingual/test.json \
+  -ep 100 \
+  -b 128 \
+  --device cuda:0 \
+  -bf16 \
+  -n multilingual_model \
+  -s 2 \
+  -es 10
+```
+This command fine-tunes the `bert-base-multilingual-cased` model with the following configuration:
 
-For finetuning a multilingual model from the `bert-base-multilingual-cased` model, on 100 epochs, a batch size of 128,
-on the GPU, using bf16 mixed precision, saving the model every two epochs and with and early stopping value of 10.
+- **Epochs**: `100`
+- **Batch size**: `128`
+- **Device**: `cuda:0` (GPU)
+- **Precision**: `bf16` (bfloat16 mixed precision)
+- **Checkpointing**: Saves the model every 2 epochs
+- **Early stopping**: Stops after 10 epochs without improvement
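An aside on what the `£`-based labelling amounts to: the sketch below is hypothetical, not code from this repository, and works on whitespace words rather than the BERT subword tokens the real pipeline uses. It shows how a delimiter-marked training example can be turned into word-level token-classification labels, with label `1` on each word that opens a new clause.

```python
# Hypothetical sketch (not Aquilign's actual code): each word receives
# label 1 if it opens a new clause, i.e. if it was immediately preceded
# by the "£" delimiter in the training example, and label 0 otherwise.
DELIMITER = "£"

def example_to_labels(example: str) -> tuple[list[str], list[int]]:
    tokens, labels = [], []
    for word in example.split():
        if word.startswith(DELIMITER):
            tokens.append(word[len(DELIMITER):])
            labels.append(1)  # a clause boundary opens at this word
        else:
            tokens.append(word)
            labels.append(0)
    return tokens, labels

# One of the Castilian training examples from the dataset:
tokens, labels = example_to_labels(
    "que mi padre me diese £por muger a un su fijo del Rey"
)
```

Here `por` would carry label `1` and every other word label `0`; a `BertForTokenClassification` head is trained to predict exactly this kind of per-token boundary label.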
 
-The training data must follow the following structure and will be validated against a specific JSON schema.
+---
 
-```JSON
-{"metadata":
-    {
+### 🗂️ Input Format: JSON Schema
+
+Training data must follow a structured JSON format, including both metadata and examples.
+
+```json
+{
+  "metadata": {
     "lang": ["la", "it", "es", "fr", "en", "ca", "pt"],
-    "centuries": [13, 14, 15, 16], "delimiter": "£"
+    "centuries": [13, 14, 15, 16],
+    "delimiter": "£"
   },
-  "examples":
-  [
-    {"example": "que mi padre me diese £por muger a un su fijo del Rey",
-     "lang": "es"},
-    {"example": "Per fé, disse Lion, £i v’andasse volentieri, £ma i vo veggio £qui",
-     "lang": "it"}
-  ]
+  "examples": [
+    {
+      "example": "que mi padre me diese £por muger a un su fijo del Rey",
+      "lang": "es"
+    },
+    {
+      "example": "Per fé, disse Lion, £i v’andasse volentieri, £ma i vo veggio £qui",
+      "lang": "it"
+    }
+  ]
 }
 ```
-The metadata is used for describing the corpus and will be parsed in search for the delimiter. It is the only mandatory
-information.
+- The `metadata` block must include:
+
+  - `"lang"`: a list of ISO 639-1 codes representing the languages in the dataset
+  - `"centuries"`: historical coverage of the examples (used for metadata and possible filtering)
+  - `"delimiter"`: the segmentation marker token (default: `£`), predicted by the model
 
-We recommend using the ISO codes for the target languages.
-The codes must match the language codes that are in the [`aquilign/preproc/delimiters.json`](aquilign/preproc/delimiters.json) file, used for the
-regexp tokenization that can be used as a baseline.
+- The `examples` block is an array of training samples, each containing:
 
-## Use of the aligner
+  - `"example"`: a string of text including segmentation markers
+  - `"lang"`: the ISO code of the language the text belongs to
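A minimal sketch of what validation of this format could look like, assuming only the rules stated above. The repository validates against its own JSON schema; the `validate` helper and its error messages here are invented for illustration.

```python
import json

# Hypothetical validator sketch: the field names come from the README,
# but this function and its messages are illustrative, not the repo's schema.
def validate(dataset: dict) -> list[str]:
    errors = []
    metadata = dataset.get("metadata", {})
    for key in ("lang", "centuries", "delimiter"):
        if key not in metadata:
            errors.append(f"metadata is missing {key!r}")
    delimiter = metadata.get("delimiter", "£")
    for i, ex in enumerate(dataset.get("examples", [])):
        if "example" not in ex or "lang" not in ex:
            errors.append(f"examples[{i}] lacks 'example' or 'lang'")
        elif ex["lang"] not in metadata.get("lang", []):
            errors.append(f"examples[{i}] declares unknown language {ex['lang']!r}")
        elif delimiter not in ex["example"]:
            errors.append(f"examples[{i}] contains no delimiter")
    return errors

# A well-formed miniature dataset passes with no errors:
sample = json.loads("""
{"metadata": {"lang": ["es"], "centuries": [13], "delimiter": "£"},
 "examples": [{"example": "que mi padre me diese £por muger", "lang": "es"}]}
""")
```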
 
-`python3 main.py -o lancelot -i data/extraitsLancelot/ii-48/ -mw data/extraitsLancelot/ii-48/fr/micha-ii-48.txt -d
-cuda:0 -t bert-based` to perform alignment with our bert-based segmenter, choosing Micha edition as base witness,
-on the GPU. The results will be saved in `result_dir/lancelot`
+---
 
-`python3 main.py --help` to print help.
+📖 For more details, see the full documentation:
+➡️ [segmentation_model.md](https://github.com/carolisteia/multilingual-segmentation-dataset/blob/main/docs/segmentation_model.md)
 
-Files must be sorted by language, using the ISO_639-1 language code as parent directory name (`es`, `fr`, `it`, `en`, etc).
-## Citation
 
-Gille Levenson, M., Ing, L., & Camps, J.-B. (2024). Textual Transmission without Borders: Multiple Multilingual Alignment and Stemmatology of the ``Lancelot en prose’’ (Medieval French, Castilian, Italian). In W. Haverals, M. Koolen, & L. Thompson (Eds.), Proceedings of the Computational Humanities Research Conference 2024 (Vol. 3834, pp. 65–92). CEUR. https://ceur-ws.org/Vol-3834/#paper104
+## 🧮 Using the Aligner
 
+To align a set of parallel texts using the BERT-based segmenter, run:
 
+```bash
+python3 main.py \
+  -o lancelot \
+  -i data/extraitsLancelot/ii-48/ \
+  -mw data/extraitsLancelot/ii-48/fr/micha-ii-48.txt \
+  -d cuda:0 \
+  -t bert-based
 ```
+This will:
+
+- ✅ Align the multilingual files found in `data/extraitsLancelot/ii-48/`
+- 📚 Use the **Micha edition** (French) as the **base witness**
+- ⚙️ Run on the **GPU** (`cuda:0`)
+- 💾 Save results to: `result_dir/lancelot/`
+
+
+> 📂 Files must be sorted by language, using the ISO 639-1 language code
+> as the **parent directory name** (`es/`, `fr/`, `it/`, `en/`, etc.).
+
+To view all available options:
+
+```bash
+python3 main.py --help
+```
+
+---
+## 🔗 Related Projects
+
+**Aquilign** is part of a broader ecosystem of tools and corpora developed for the computational study of medieval multilingual textual traditions. The following repositories provide aligned datasets, segmentation resources, and use cases for the Aquilign pipeline:
+
+- [Multilingual Segmentation Dataset](https://github.com/carolisteia/multilingual-segmentation-dataset)
+  Sentence- and clause-level segmentation datasets in seven medieval languages, used to train and evaluate the segmentation model integrated into Aquilign.
+
+- [Parallelium – an aligned scriptures dataset](https://github.com/carolisteia/parallelium-scriptures-alignment-dataset)
+  A multilingual dataset of aligned Biblical and Qur’anic texts (medieval and modern), used for benchmarking multilingual alignment in diverse historical settings.
+
+- [Lancelot par maints langages](https://github.com/carolisteia/lancelot-par-maints-langages)
+  A parallel corpus of *Lancelot en prose* in French, Castilian, and Italian. The first testbed for Aquilign’s multilingual alignment and stemmatological comparison.
+
+- [Multilingual Aegidius](https://github.com/ProMeText/Multilingual_Aegidius)
+  A corpus of *De regimine principum* and its translations in Latin, Romance vernaculars, and Middle English. Built using the Aquilign segmentation and alignment workflow.
+
+---
+
+## 🚧 Project Status & Future Directions
+
+**Aquilign** is under active development and currently supports:
+
+- ✅ Sentence- and clause-level alignment across multiple languages
+- ✅ Integration with BERT-based and regex-based segmenters
+- ✅ Alignment evaluation and output export in tabular format
+- ✅ Compatibility with multilingual historical corpora (e.g. *Lancelot*, *De regimine principum*)
+
+---
+
+### 🔮 Planned Features
+
+- 🧬 **Collation Module**:
+  Automatic generation of collation tables across aligned witnesses for textual variant analysis
+
+- 🏛️ **Stemmatic Analysis Integration**:
+  Tools for stemmatological inference based on alignment structure and textual divergence
+
+- 📊 **Interactive Visualization Tools**:
+  Visualization of alignments, variant graphs, and stemma hypotheses
+
+- 🌐 **Support for Additional Languages**:
+  Extending tokenization and alignment capabilities to new premodern languages and scripts
+
+---
+
+If you're interested in contributing to any of these areas or proposing enhancements, see [Contact & Contributions](#-contact--contributions).
+
+---
+
+## 📫 Contact & Contributions
+
+We welcome questions, feedback, and contributions to improve the Aquilign pipeline.
+
+- 🛠️ Found a bug or have a feature request?
+  ➡️ [Open an issue](https://github.com/ProMeText/Aquilign/issues)
+
+- 🔄 Want to contribute code or improvements?
+  ➡️ Fork the repo and submit a pull request
+
+- 🎓 For academic collaboration or project inquiries:
+  ➡️ Reach out via [GitHub Discussions](https://github.com/ProMeText/Aquilign/discussions) or contact the authors directly
+
+---
+## 📚 Citation
+
+If you use this tool in your research, please cite:
+
+Gille Levenson, M., Ing, L., & Camps, J.-B. (2024).
+**Textual Transmission without Borders: Multiple Multilingual Alignment and Stemmatology of the _Lancelot en prose_ (Medieval French, Castilian, Italian).**
+In W. Haverals, M. Koolen, & L. Thompson (Eds.), *Proceedings of the Computational Humanities Research Conference 2024* (Vol. 3834, pp. 65–92). CEUR.
+🔗 [https://ceur-ws.org/Vol-3834/#paper104](https://ceur-ws.org/Vol-3834/#paper104)
+
+### 📄 BibTeX
+
+```bibtex
 @inproceedings{gillelevenson_TextualTransmissionBorders_2024a,
-  title = {Textual {{Transmission}} without {{Borders}}: {{Multiple Multilingual Alignment}} and {{Stemmatology}} of the ``{{Lancelot}} En Prose'' ({{Medieval French}}, {{Castilian}}, {{Italian}})},
-  shorttitle = {Textual {{Transmission}} without {{Borders}}},
-  booktitle = {Proceedings of the {{Computational Humanities}} {{Research Conference}} 2024},
+  title = {Textual Transmission without Borders: Multiple Multilingual Alignment and Stemmatology of the ``Lancelot En Prose'' (Medieval French, Castilian, Italian)},
+  shorttitle = {Textual Transmission without Borders},
+  booktitle = {Proceedings of the Computational Humanities Research Conference 2024},
   author = {Gille Levenson, Matthias and Ing, Lucence and Camps, Jean-Baptiste},
   editor = {Haverals, Wouter and Koolen, Marijn and Thompson, Laure},
   date = {2024},
-  series = {{{CEUR Workshop Proceedings}}},
+  series = {CEUR Workshop Proceedings},
   volume = {3834},
   pages = {65--92},
   publisher = {CEUR},
   location = {Aarhus, Denmark},
   issn = {1613-0073},
   url = {https://ceur-ws.org/Vol-3834/#paper104},
   urldate = {2024-12-09},
-  eventtitle = {Computational {{Humanities Research}} 2024},
-  langid = {english},
-  file = {/home/mgl/Bureau/Travail/Bibliotheque_zoteros/storage/CIH7IAHV/Levenson et al. - 2024 - Textual Transmission without Borders Multiple Multilingual Alignment and Stemmatology of the ``Lanc.pdf}
+  eventtitle = {Computational Humanities Research 2024},
+  langid = {english}
 }
-
 ```
+---
+## 💰 Funding
 
+This work benefited from national funding managed by the **Agence Nationale de la Recherche**
+under the *Investissements d'avenir* programme with the reference:
+**ANR-21-ESRE-0005 (Biblissima+)**
 
-## Licence
-
-This fork is released under the [GNU General Public License v3.0](./LICENCE)
-
-## Funding
-
-This work benefited from national funding managed by the Agence Nationale de la Recherche under the Investissements d'avenir programme with the reference ANR-21-ESRE-0005 (Biblissima+).
+> Ce travail a bénéficié d'une aide de l’État gérée par l’**Agence Nationale de la Recherche**
+> au titre du programme d’**Investissements d’avenir**, référence **ANR-21-ESRE-0005 (Biblissima+)**.
 
-Ce travail a bénéficié d'une aide de l’État gérée par l'Agence Nationale de la Recherche au titre du programme d’Investissements d’avenir portant la référence ANR-21-ESRE-0005 (Biblissima+)
+<p align="center">
+  <img src="https://github.com/user-attachments/assets/915c871f-fbaa-45ea-8334-2bf3dde8252d" alt="Biblissima+ Logo" width="600"/>
+</p>
 
-![image](https://github.com/user-attachments/assets/915c871f-fbaa-45ea-8334-2bf3dde8252d)
+## ⚖️ License
 
+This project is released under the **[GNU General Public License v3.0](./LICENCE)**.
+You are free to use, modify, and redistribute the code under the same license conditions.

Deleted files:

- data/Geste (0 additions, 1 deletion)
- data/Otinel/fr/transcr_Otin_A_pos.txt (0 additions, 32 deletions)
- data/Otinel/fr/transcr_Otin_B_pos.txt (0 additions, 31 deletions)
