|
# Tibert
|
`Tibert` is a transformers-compatible reproduction of the coreference model from [End-to-end Neural Coreference Resolution](https://aclanthology.org/D17-1018/), with several modifications. Among these:
|
- Usage of BERT (or any BERT variant) as an encoder, as in [BERT for Coreference Resolution: Baselines and Analysis](https://aclanthology.org/D19-1588/)
- Support for batch sizes greater than 1
- Support for singletons, as in [Adapted End-to-End Coreference Resolution System for Anaphoric Identities in Dialogues](https://aclanthology.org/2021.codi-sharedtask.6)
- Hierarchical merging, as in [Coreference in Long Documents using Hierarchical Entity Merging](https://aclanthology.org/2024.latechclfl-1.2/)
|
It can be installed with `pip install tibert`.
|
Printing `annotated_doc.coref_chains` after predicting coreferences on a document gives:

`>>>[[Mention(tokens=['The', 'princess'], start_idx=11, end_idx=13), Mention(tokens=['Princess', 'Liana'], start_idx=0, end_idx=2)], [Mention(tokens=['Zarth', 'Arn'], start_idx=6, end_idx=8)]]`
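
For programmatic use, the chains can also be walked directly. The sketch below assumes `annotated_doc` is the result of a `predict_coref` call and relies only on the `tokens`, `start_idx` and `end_idx` fields visible in the output above:

```python
# Iterate over predicted coreference chains; each chain is a list of
# Mention objects with `tokens`, `start_idx` and `end_idx` attributes.
for chain_i, chain in enumerate(annotated_doc.coref_chains):
    for mention in chain:
        mention_text = " ".join(mention.tokens)
        print(f"chain {chain_i}: '{mention_text}' ({mention.start_idx}-{mention.end_idx})")
```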

## Hierarchical Merging

Hierarchical merging reduces RAM usage and computation when performing inference on long documents. The user provides the text split into chunks, and the model performs prediction chunk by chunk, so the whole document never has to be held in memory at once. Hierarchical merging then merges the per-chunk predictions, which allows scaling to arbitrarily long documents. See [Coreference in Long Documents using Hierarchical Entity Merging](https://aclanthology.org/2024.latechclfl-1.2/) for more details.

Hierarchical merging can be used as follows:

```python
from tibert import BertForCoreferenceResolution, predict_coref
from tibert.utils import pprint_coreference_document
from transformers import BertTokenizerFast

model = BertForCoreferenceResolution.from_pretrained(
    "compnet-renard/bert-base-cased-literary-coref"
)
tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")

# The long document is provided as a list of chunks.
chunk1 = "Princess Liana felt sad, because Zarth Arn was gone."
chunk2 = "She went to sleep."

# Each chunk is predicted separately, and hierarchical_merging=True
# merges the per-chunk predictions into a single annotated document.
annotated_doc = predict_coref(
    [chunk1, chunk2], model, tokenizer, hierarchical_merging=True
)

pprint_coreference_document(annotated_doc)
```

This results in:

`>>>(1 Princess Liana ) felt sad , because (0 Zarth Arn ) was gone . (1 She ) went to sleep .`

Even though the mentions `Princess Liana` and `She` are not in the same chunk, hierarchical merging still resolves this case correctly.

*Note that, at the time of writing, the performance of the hierarchical merging feature has not been benchmarked.*

## Training a model
|
Aside from the `tibert.train.train_coref_model` function, it is possible to train a model from the command line. Training a model requires installing the `sacred` library. Here is the most basic example:
|