Commit ea2bca3
New nvtext::wordpiece_tokenizer APIs (rapidsai#17600)
Creates a new word-piece-tokenizer which replaces the existing subword-tokenizer in nvtext.
The subword-tokenizer logic is split out and specialized to perform basic tokenizing with the word-piece logic only.
The normalizing step is already a separate API. The output is a lists column of tokens only.
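For intuition, the word-piece step itself can be sketched in plain Python: greedy longest-prefix matching of each word against a vocabulary. This is only a model of the technique, not the CUDA implementation; the `##` continuation prefix and the toy vocabulary are assumptions borrowed from BERT-style word-piece vocabularies.

```python
def wordpiece(word, vocab, unk=-1):
    """Greedy longest-match-first split of one word into token ids."""
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        piece_id = None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # mark continuation pieces
            if piece in vocab:
                piece_id = vocab[piece]
                break
            end -= 1  # shrink the candidate piece and retry
        if piece_id is None:
            return [unk]  # no prefix matched: whole word is unknown
        tokens.append(piece_id)
        start = end
    return tokens

vocab = {"hello": 0, "world": 1, "##s": 2, "un": 3, "##known": 4}
print(wordpiece("hellos", vocab))   # [0, 2]
print(wordpiece("unknown", vocab))  # [3, 4]
```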
The first change is that the new APIs use `wordpiece` instead of `subword` in their names. Here are the two C++ declarations:
```cpp
std::unique_ptr<wordpiece_vocabulary> load_wordpiece_vocabulary(
  cudf::strings_column_view const& input,
  rmm::cuda_stream_view stream,
  rmm::device_async_resource_ref mr);
```
The vocabulary is loaded as a strings column and the returned object can be used on multiple calls to the next API:
```cpp
std::unique_ptr<cudf::column> wordpiece_tokenize(
  cudf::strings_column_view const& input,
  wordpiece_vocabulary const& vocabulary,
  cudf::size_type max_words_per_row,
  rmm::cuda_stream_view stream,
  rmm::device_async_resource_ref mr);
```
This returns a lists column of integers representing the tokens for each row. The `max_words_per_row` parameter stops the tokenizing for a row once that number of input words (characters delimited by space) has been reached. This means a row may produce more than `max_words_per_row` tokens if a single word produces multiple tokens.
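The word-count cutoff can be illustrated with a small pure-Python model. The function name and toy vocabulary here are hypothetical, and a simplified greedy match (no continuation prefix) stands in for the real kernel; the point is only that the limit counts space-delimited *words*, not output tokens.

```python
def tokenize_row(row, vocab, max_words):
    """Tokenize up to max_words space-delimited words; each word may
    emit several tokens, so the output can exceed max_words."""
    out = []
    for word in row.split()[:max_words]:  # stop after max_words words
        start = 0
        while start < len(word):
            end = len(word)
            while end > start and word[start:end] not in vocab:
                end -= 1  # greedy longest-prefix match
            if end == start:      # no piece matched: unknown token
                out.append(-1)
                break
            out.append(vocab[word[start:end]])
            start = end
    return out

vocab = {"rain": 0, "bow": 1, "sun": 2}
# Only 2 words are kept, but "rainbow" alone yields 2 tokens,
# so the row still produces 3 tokens total.
print(tokenize_row("rainbow sun set", vocab, max_words=2))  # [0, 1, 2]
```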
Note that this API expects the input string to already be normalized, i.e. processed by the `nvtext::normalize_characters` API, which is also being reworked in rapidsai#17818.
The Python interface has the following pattern:
```python
from cudf.core.wordpiece_tokenize import WordPieceVocabulary
input_string = .... # output of the normalizer
vocab_file = os.path.join(datadir, "bert_base_cased_sampled/vocab.txt")
vc = cudf.read_text(vocab_file, delimiter="\n", strip_delimiters=True)
wpt = WordPieceVocabulary(vc)
wpr = wpt.tokenize(input_string)
```
The output is a lists column of the tokens and no longer the tensor-data-and-metadata format.
If that format is needed, we can consider a third API that converts this output to it.
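To make the distinction concrete, such a conversion could look roughly like the sketch below: pad or truncate the ragged token lists into fixed-width rows plus an attention mask. The function name, the mask layout, and the BERT-style shape are assumptions for illustration, not the old subword-tokenizer's exact output.

```python
def to_tensor(token_lists, max_len, pad_id=0):
    """Pad/truncate ragged token lists into fixed-width id rows plus
    a 0/1 mask marking which positions hold real tokens."""
    ids, mask = [], []
    for row in token_lists:
        row = row[:max_len]          # truncate overly long rows
        pad = max_len - len(row)
        ids.append(row + [pad_id] * pad)
        mask.append([1] * len(row) + [0] * pad)
    return ids, mask

ids, mask = to_tensor([[5, 7, 9], [3]], max_len=4)
print(ids)   # [[5, 7, 9, 0], [3, 0, 0, 0]]
print(mask)  # [[1, 1, 1, 0], [1, 0, 0, 0]]
```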
Closes rapidsai#17507
Authors:
- David Wendt (https://github.com/davidwendt)
Approvers:
- Shruti Shivakumar (https://github.com/shrshi)
- Basit Ayantunde (https://github.com/lamarrr)
- GALI PREM SAGAR (https://github.com/galipremsagar)
- Bradley Dice (https://github.com/bdice)
URL: rapidsai#17600
File tree (16 files changed, +1,900 −10 lines):
- cpp
  - benchmarks/text
  - include/nvtext
  - src/text
  - tests/text
- python
  - cudf/cudf
    - core
      - column
    - tests/text
  - pylibcudf/pylibcudf
    - libcudf/nvtext
    - nvtext
    - tests