Deep Reference Parser is a Bidirectional Long Short Term Memory (BiLSTM) Deep Neural Network with a stacked Conditional Random Field (CRF) for identifying references from text. It is designed to be used in the [Reach](https://github.com/wellcometrust/reach) tool to replace a number of existing machine learning models that find references and extract their constituent parts (e.g. author, year, publication, volume).

The intention for this project, like that of Rodrigues et al. (2018), is to implement a MultiTask model that completes three tasks simultaneously: reference span detection, reference component detection, and reference type classification.
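
For orientation, the sketch below shows the general shape of a single-task BiLSTM-CRF tagger like the one described above. It is a minimal illustration that assumes Keras with the `keras-contrib` CRF layer; the layer sizes and vocabulary size are placeholders, not the hyperparameters of the shipped model.

```
# Minimal BiLSTM-CRF tagger sketch. Illustrative only: all sizes are
# placeholders, not the shipped model's hyperparameters.
from keras.models import Sequential
from keras.layers import Embedding, Bidirectional, LSTM
from keras_contrib.layers import CRF  # assumes the keras-contrib package

VOCAB_SIZE = 10000  # placeholder vocabulary size
N_LABELS = 4        # the span labels: b-r, i-r, e-r, o

model = Sequential()
model.add(Embedding(VOCAB_SIZE, 300))                        # word embeddings
model.add(Bidirectional(LSTM(100, return_sequences=True)))   # BiLSTM encoder
crf = CRF(N_LABELS)  # the stacked CRF layer that makes the model hard to save
model.add(crf)
model.compile(optimizer="adam", loss=crf.loss_function, metrics=[crf.accuracy])
```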
### Current status
|Component|Individual|MultiTask|
|---|---|---|
|Spans|✔️ Implemented|❌ Not Implemented|
|Components|❌ Not Implemented|❌ Not Implemented|
|Type|❌ Not Implemented|❌ Not Implemented|

### The model
The model itself is based on the work of [Rodrigues et al. (2018)](https://github.com/dhlab-epfl/LinkedBooksDeepReferenceParsing), although the implementation here differs significantly. The main differences are:

* We use a combination of the training data used by Rodrigues et al. (2018) and data that we have labelled ourselves. No Rodrigues et al. data are included in the test and validation sets.
* We also use a new word embedding that has been trained on documents relevant to medicine.
* Whereas Rodrigues et al. split documents into lines and sent the lines to the model, we combine the lines of the document and send larger chunks to the model, giving it more context to work with when training and predicting.
* Whilst the model makes predictions at the token level, it outputs references by naively splitting on these tokens ([source](https://github.com/wellcometrust/deep_reference_parser/blob/master/deep_reference_parser/tokens_to_references.py)); a sketch of this splitting follows this list.
* Hyperparameters are passed to the model in a config (.ini) file. This helps keep track of experiments, but is also necessary because it is difficult to save the model with the CRF architecture, so the model object must be rebuilt (not re-trained!) each time you want to use it. Storing the hyperparameters in a config file makes this easier.
* The package ships with a [config file](https://github.com/wellcometrust/deep_reference_parser/blob/master/deep_reference_parser/configs/2019.12.0.ini) which defines the latest, highest performing model. The config file defines where to find the various objects required to build the model (dictionaries, weights, embeddings), and these will be fetched automatically when the model is run, if they are not found locally.
* The model includes a command line interface inspired by [spaCy](https://github.com/explosion/spaCy); functions can be called from the command line with `python -m deep_reference_parser` ([source](https://github.com/wellcometrust/deep_reference_parser/blob/master/deep_reference_parser/predict.py)).
* The Python version has been updated to 3.7, along with dependencies (although more remains to be done).
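
The sketch below illustrates the naive splitting mentioned in the list above. It is a simplified re-implementation for illustration, not the package's own `tokens_to_references` code.

```
# Illustrative sketch of naive token-to-reference splitting: open a new
# reference at each b-r label and close it at the next e-r label. This is
# a simplification, not the package's own tokens_to_references code.
def tokens_to_references(tokens, labels):
    references = []
    current = []
    for token, label in zip(tokens, labels):
        if label == "b-r":          # a new reference span begins
            current = [token]
        elif label in ("i-r", "e-r") and current:
            current.append(token)
            if label == "e-r":      # the span ends: emit the reference
                references.append(" ".join(current))
                current = []
    return references

tokens = ["1", ".", "Rodrigues", "et", "al", ".", "2018", "Text", "here"]
labels = ["o", "o", "b-r", "i-r", "i-r", "i-r", "e-r", "o", "o"]
print(tokens_to_references(tokens, labels))
# ['Rodrigues et al . 2018']
```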
### Performance
#### Span detection
|token|f1|support|
|---|---|---|
|b-r|0.9364|2472|
|e-r|0.9312|2424|
|i-r|0.9833|92398|
|o|0.9561|32666|
|weighted avg|0.9746|129959|

#### Computing requirements
Models are trained on AWS instances using CPU only.

### Config files

The package uses config files to store hyperparameters for the models.
A [config file](https://github.com/wellcometrust/deep_reference_parser/blob/master/deep_reference_parser/configs/2019.12.0.ini) describing the parameters of the best performing model ships with the package.
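
The linked file is the authoritative version; the sketch below only illustrates the general shape of such a config. Apart from `output_path`, which is referenced later in this README, the section and key names are placeholders rather than the real schema.

```
; Illustrative sketch only. Apart from output_path (referenced later in
; this README), the section and key names are placeholders.
[paths]
output_path = models/2019.12.0
word_embeddings = embeddings/medical_word_embeddings.txt
weights = models/2019.12.0/weights.h5

[model]
lstm_units = 100
dropout = 0.5
epochs = 10
```
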
To get a list of the available commands, run `python -m deep_reference_parser`:
```
$ python -m deep_reference_parser
Using TensorFlow backend.

ℹ Available commands
train, predict
```
For additional help, you can pass a command with the `-h`/`--help` flag:
```
$ python -m deep_reference_parser predict --help
Using TensorFlow backend.
usage: deep_reference_parser predict [-h]
                                     [-c]
                                     [-t] [-v]
                                     text

positional arguments:
  text              Plaintext from which to extract references

optional arguments:
  -h, --help        show this help message and exit
  -c --config-file  Path to config file
  -t, --tokens      Output tokens instead of complete references
  -v, --verbose     Output more verbose results
```
### Training your own models
To train your own models you will need to define the model hyperparameters in a config file like the one above. The config file is then passed to the train command as the only argument. Note that the `output_path` defined in the config file will be created if it does not already exist.
```
python -m deep_reference_parser train test.ini
```
Data must be prepared in the following tab-separated (TSV) format. We may publish further tools in the future to assist in the preparation of data following annotation. In this case the data for reference span detection follows an IOBE schema.

You must provide the train/test/validation data splits in this format in pre-prepared files that are defined in the config file.
```
References	o
1	o
The	b-r
potency	i-r
of	i-r
history	i-r
was	i-r
on	i-r
display	i-r
at	i-r
a	i-r
workshop	i-r
held	i-r
in	i-r
February	i-r
```
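
For illustration, the sketch below reads data in this format back into token and label sequences. It assumes that blank lines separate sequences; it is not the package's own data-loading code.

```
# Sketch of reading the token/label TSV format above. Assumes blank
# lines separate sequences; not the package's own data-loading code.
def load_tsv(path):
    sequences = []            # list of (tokens, labels) pairs
    tokens, labels = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line.strip():              # blank line: end of sequence
                if tokens:
                    sequences.append((tokens, labels))
                    tokens, labels = [], []
                continue
            token, label = line.split("\t")   # one token-label pair per line
            tokens.append(token)
            labels.append(label)
    if tokens:                                # flush the final sequence
        sequences.append((tokens, labels))
    return sequences
```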
### Making predictions
If you wish to use the latest model that we have trained, you can simply run the `predict` command on your text.
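
Sketching from the help output above, a minimal invocation might look like the following; the quoted input string is just a placeholder:

```
$ python -m deep_reference_parser predict "<plaintext containing references>"
```
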
If you wish to use a custom model that you have trained, you must specify the config file which defines the hyperparameters for that model using the `-c` flag.
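
Again a sketch based on the help output above, with a hypothetical config path:

```
$ python -m deep_reference_parser predict -c path/to/your_model.ini "<plaintext containing references>"
```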