<h1 align="center">multi-task-NLP</h1>
<p align="center">
    <a href='https://multi-task-nlp.readthedocs.io/en/latest/?badge=latest'>
        <img src='https://readthedocs.org/projects/multi-task-nlp/badge/?version=latest' alt='Documentation Status' />
    </a>
    <a href="https://github.com/hellohaptik/multi-task-NLP/blob/master/LICENSE">
        <img src="https://img.shields.io/github/license/hellohaptik/multi-task-NLP">
    </a>
    <a href="https://github.com/hellohaptik/multi-task-NLP/graphs/contributors">
        <img src="https://img.shields.io/badge/contributors-3-yellow">
    </a>
    <a href="https://github.com/hellohaptik/multi-task-NLP/issues">
        <img src="https://img.shields.io/github/issues/hellohaptik/multi-task-NLP?color=orange">
    </a>
</p>

<p align="center">
    <img src="docs/source/multi_task.png" width="500" height="550">
</p>

multi_task_NLP is a utility toolkit enabling NLP developers to easily train and infer a single model for multiple tasks.
We support various data formats for the majority of NLU tasks and multiple transformer-based encoders (e.g. BERT, DistilBERT, ALBERT, RoBERTa, XLNet, etc.).

For the complete documentation of this library, please refer to the [documentation](https://multi-task-nlp.readthedocs.io/en/latest/).

## What is multi_task_NLP about?

Any conversational AI system involves building multiple components to perform various tasks, plus a pipeline to stitch all the components together.
Given the recent effectiveness of transformer-based models in NLP, it’s very common to build a transformer-based model to solve your use case.
But having multiple such models running together in a conversational AI system can lead to expensive resource consumption, increased prediction latency, and a system that is difficult to manage.
This poses a real challenge for anyone who wants to build a conversational AI system in a simple way.

multi_task_NLP gives you the capability to define multiple tasks together and train a single model that simultaneously learns on all the defined tasks.
This means you can perform multiple tasks with latency and resource consumption equivalent to a single task.

## Installation

To use multi-task-NLP, clone the repository to the desired location on your system
with the following terminal commands.

```console
$ cd /desired/location/
$ git clone https://github.com/hellohaptik/multi-task-NLP.git
$ cd multi-task-NLP
$ pip install -r requirements.txt
```

NOTE: The library is built and tested with ``Python 3.7.3``. It is recommended to install the requirements in a virtual environment.

## Quickstart Guide

A quick guide to show how a single model can be trained for multiple NLU tasks in just 3 simple steps
and with **no code required!**

Follow these 3 simple steps to train your multi-task model!

### Step 1 - Define your task file

The task file is a YAML file in which you add all the tasks for which you want to train a multi-task model.

```yaml
TaskA:
  model_type: BERT
  config_name: bert-base-uncased
  dropout_prob: 0.05
  label_map_or_file:
    - label1
    - label2
    - label3
  metrics:
    - accuracy
  loss_type: CrossEntropyLoss
  task_type: SingleSenClassification
  file_names:
    - taskA_train.tsv
    - taskA_dev.tsv
    - taskA_test.tsv

TaskB:
  model_type: BERT
  config_name: bert-base-uncased
  dropout_prob: 0.3
  label_map_or_file: data/taskB_train_label_map.joblib
  metrics:
    - seq_f1
    - seq_precision
    - seq_recall
  loss_type: NERLoss
  task_type: NER
  file_names:
    - taskB_train.tsv
    - taskB_dev.tsv
    - taskB_test.tsv
```
To know more about the parameters used in creating your task file, refer to [task file parameters](https://multi-task-nlp.readthedocs.io/en/latest/define_multi_task_model.html#task-file-parameters).
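
If you just want to smoke-test the pipeline before plugging in real data, a sketch like the following can create the TaskA files listed under ``file_names``. The tab-separated ``id  label  sentence`` row layout used here is an assumption; see the data format documentation for the exact layout each task type expects.

```python
# Hypothetical helper: writes tiny TaskA data files for a dry run.
# Assumes SingleSenClassification rows are "id<TAB>label<TAB>sentence".
import csv
import os

os.makedirs("data", exist_ok=True)
rows = [
    ("0", "label1", "sample sentence one"),
    ("1", "label2", "sample sentence two"),
    ("2", "label3", "sample sentence three"),
]
for name in ("taskA_train.tsv", "taskA_dev.tsv", "taskA_test.tsv"):
    with open(os.path.join("data", name), "w", newline="") as f:
        csv.writer(f, delimiter="\t").writerows(rows)
```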

### Step 2 - Run data preparation

After defining the task file, run the following command to prepare the data.

```console
$ python data_preparation.py \
    --task_file 'sample_task_file.yml' \
    --data_dir 'data' \
    --max_seq_len 50
```

To know more about the ``data_preparation.py`` script and its arguments, refer to [running data preparation](https://multi-task-nlp.readthedocs.io/en/latest/training.html#running-data-preparation).

### Step 3 - Run train

Finally, you can start your training using the following command.

```console
$ python train.py \
    --data_dir 'data/bert-base-uncased_prepared_data' \
    --task_file 'sample_task_file.yml' \
    --out_dir 'sample_out' \
    --epochs 5 \
    --train_batch_size 4 \
    --eval_batch_size 8 \
    --grad_accumulation_steps 2 \
    --log_per_updates 25 \
    --save_per_updates 1000 \
    --eval_while_train True \
    --test_while_train True \
    --max_seq_len 50 \
    --silent True
```

Note that with ``--train_batch_size 4`` and ``--grad_accumulation_steps 2``, gradients are accumulated over 2 steps, giving an effective batch size of 8.

To know more about the ``train.py`` script and its arguments, refer to [running train](https://multi-task-nlp.readthedocs.io/en/latest/training.html#running-train).


## How to Infer?

Once you have a multi-task model trained on your tasks, we provide a convenient and easy way to get
predictions on samples through the **inference pipeline**.

To run inference on samples using a trained model for, say, TaskA, TaskB and TaskC,
you can import the ``inferPipeline`` class and load the corresponding multi-task model by creating an object of this class.

```python
>>> from infer_pipeline import inferPipeline
>>> pipe = inferPipeline(modelPath='sample_out_dir/multi_task_model.pt', maxSeqLen=50)
```

The ``infer`` function can be called to get the predictions for input samples
for the mentioned tasks. Each sample is itself a list: it would carry one sentence for single-sentence tasks and both sentences for sentence-pair tasks.

```python
>>> samples = [['sample_sentence_1'], ['sample_sentence_2']]
>>> tasks = ['TaskA', 'TaskB']
>>> pipe.infer(samples, tasks)
```
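
The predictions can then be consumed per input sample, as in the minimal sketch below. Treating the return value as one entry per sample is an assumption here; check the inference documentation for the exact return structure.

```python
>>> results = pipe.infer(samples, tasks)
>>> for sample, result in zip(samples, results):  # assumes one result entry per sample
...     print(sample, result)
```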

To know more about the ``infer_pipeline``, refer to [infer](https://multi-task-nlp.readthedocs.io/en/latest/infering.html).

## Examples

Here you can find various conversational AI tasks as examples and train multi-task models
in the simple steps mentioned in the notebooks.

### Example-1 Intent detection, NER, Fragment detection

**Tasks Description**

``Intent Detection`` :- This is a single sentence classification task where an `intent` specifies which class the data sample belongs to.

``NER`` :- This is a Named Entity Recognition/ Sequence Labelling/ Slot filling task where individual words of a sentence are tagged with the entity labels they belong to. Words which don't belong to any entity label are simply labeled as "O". For instance, in "book a flight to Boston", "Boston" would carry a location-type entity label while the remaining words would be tagged as "O".

``Fragment Detection`` :- This is modeled as a single sentence classification task which detects whether a sentence is incomplete (fragment) or not (non-fragment).

**Conversational Utility** :- Intent detection is one of the fundamental components of a conversational system as it gives a broad understanding of the category/domain the sentence/query belongs to.

NER helps in extracting values for the required entities (e.g. location, date-time) from a query.

Fragment detection is a very useful piece in a conversational system, as knowing whether a query/sentence is incomplete can aid in discarding bad queries beforehand.

**Data** :- In this example, we are using the [SNIPS](https://snips-nlu.readthedocs.io/en/latest/dataset.html) data for intent and entity detection. For the sake of simplicity, we provide
the data in a simpler form under the ``snips_data`` directory, taken from [here](https://github.com/LeePleased/StackPropagation-SLU/tree/master/data/snips).

**Transform file** :- [transform_file_snips](https://github.com/hellohaptik/multi-task-NLP/blob/master/examples/intent_ner_fragment/transform_file_snips.yml)

**Tasks file** :- [tasks_file_snips](https://github.com/hellohaptik/multi-task-NLP/blob/master/examples/intent_ner_fragment/tasks_file_snips.yml)

**Notebook** :- [intent_ner_fragment](https://github.com/hellohaptik/multi-task-NLP/blob/master/examples/intent_ner_fragment/intent_ner_fragment.ipynb)

### Example-2 Entailment detection

**Tasks Description**

``Entailment`` :- This is a sentence pair classification task which determines whether the second sentence in a sample can be inferred from the first. For example, "A soccer game with multiple males playing." entails "Some men are playing a sport."

**Conversational Utility** :- In a conversational AI context, this task can be seen as determining whether the second sentence is similar to the first or not.
Additionally, the probability score can also be used as a similarity score between the sentences.

**Data** :- In this example, we are using the [SNLI](https://nlp.stanford.edu/projects/snli) data, which contains sentence pairs and labels.

**Transform file** :- [transform_file_snli](https://github.com/hellohaptik/multi-task-NLP/tree/master/examples/entailment_detection/transform_file_snli.yml)

**Tasks file** :- [tasks_file_snli](https://github.com/hellohaptik/multi-task-NLP/tree/master/examples/entailment_detection/tasks_file_snli.yml)

**Notebook** :- [entailment_snli](https://github.com/hellohaptik/multi-task-NLP/tree/master/examples/entailment_detection/entailment_snli.ipynb)

### Example-3 Answerability detection

**Tasks Description**

``answerability`` :- This is modeled as a sentence pair classification task where the first sentence is a query and the second sentence is a context passage.
The objective of this task is to determine whether the query can be answered from the context passage or not.

**Conversational Utility** :- This can be a useful component for building a question-answering/ machine comprehension based system.
In such cases, it becomes very important to determine whether the given query can be answered from the given context passage or not before extracting/abstracting an answer from it.
Performing question-answering for a query which is not answerable from the context could lead to incorrect answer extraction.

**Data** :- In this example, we are using the [MSMARCO_triples](https://msmarco.blob.core.windows.net/msmarcoranking/triples.train.small.tar.gz) data.
The data contains triplets in which the first entry is the query, the second is a context passage from which the query can be answered (positive passage), and the third is a context
passage from which the query cannot be answered (negative passage).

The data is transformed into sentence pair classification format, with the query-positive context pair labeled as 1 (answerable) and the query-negative context pair labeled as 0 (non-answerable), as sketched below.
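
For illustration, the conversion could look like the following sketch. The actual transformation is driven by the transform file linked below; the file names and output column layout here are assumptions.

```python
# Hypothetical sketch: expand each (query, positive, negative) triplet
# into two labeled query-passage pairs for sentence pair classification.
with open("triples.train.small.tsv", encoding="utf-8") as src, \
     open("answerability_pairs.tsv", "w", encoding="utf-8") as out:
    for uid, line in enumerate(src):
        query, positive, negative = line.rstrip("\n").split("\t")
        out.write(f"{2 * uid}\t1\t{query}\t{positive}\n")      # answerable
        out.write(f"{2 * uid + 1}\t0\t{query}\t{negative}\n")  # non-answerable
```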

**Transform file** :- [transform_file_answerability](https://github.com/hellohaptik/multi-task-NLP/tree/master/examples/answerability_detection/transform_file_answerability.yml)

**Tasks file** :- [tasks_file_answerability](https://github.com/hellohaptik/multi-task-NLP/tree/master/examples/answerability_detection/tasks_file_answerability.yml)

**Notebook** :- [answerability_detection_msmarco](https://github.com/hellohaptik/multi-task-NLP/tree/master/examples/answerability_detection/answerability_detection_msmarco.ipynb)