@@ -234,19 +234,21 @@ the floating point labels are converted to integer labels according to the inter
corresponding to `very negative`, `negative`, `neutral`, `positive`, `very positive` classes.
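
For readers who want to see what this conversion looks like in code, below is a minimal illustrative sketch (not the DeepPavlov dataset reader itself). The interval boundaries 0.2, 0.4, 0.6 and 0.8 are the standard SST-5 cut-offs and are assumed here rather than taken from the config.

.. code-block:: python

    # Illustrative only: bin a float SST sentiment score from [0, 1] into one of
    # the five integer classes listed above. The cut-offs are assumed, not read
    # from the DeepPavlov config.
    CLASS_NAMES = ["very negative", "negative", "neutral", "positive", "very positive"]

    def float_label_to_class_id(score: float) -> int:
        """Map a score in [0, 1] to an integer class id in {0, 1, 2, 3, 4}."""
        for class_id, upper_bound in enumerate((0.2, 0.4, 0.6, 0.8)):
            if score <= upper_bound:
                return class_id
        return 4  # scores in (0.8, 1.0] fall into the last class

    print(CLASS_NAMES[float_label_to_class_id(0.13)])  # very negative
    print(CLASS_NAMES[float_label_to_class_id(0.51)])  # neutral
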
- +------------------+--------------------+------+-------------------------------------------------------------------------------------------------+-------------+--------+--------+-----------+
- | Task | Dataset | Lang | Model | Metric | Valid | Test | Downloads |
- +==================+====================+======+=================================================================================================+=============+========+========+===========+
- | Insult detection | `Insults`_ | En | :config:`English BERT <classifiers/insults_kaggle_bert.json>` | ROC-AUC | 0.9327 | 0.8602 | 1.1 Gb |
- +------------------+--------------------+ +-------------------------------------------------------------------------------------------------+-------------+--------+--------+-----------+
- | Sentiment |`SST`_ | | :config:`5-classes SST on conversational BERT <classifiers/sentiment_sst_conv_bert.json>` | Accuracy | 0.6293 | 0.6626 | 1.1 Gb |
- +------------------+--------------------+------+-------------------------------------------------------------------------------------------------+-------------+--------+--------+-----------+
- | Sentiment |`Twitter mokoron`_ | Ru | :config:`RuWiki+Lenta emb w/o preprocessing <classifiers/sentiment_twitter.json>` | F1-macro | 0.9965 | 0.9961 | 6.2 Gb |
- + +--------------------+ +-------------------------------------------------------------------------------------------------+-------------+--------+--------+-----------+
- | |`RuSentiment`_ | | :config:`Multi-language BERT <classifiers/rusentiment_bert.json>` | F1-weighted | 0.6787 | 0.7005 | 1.3 Gb |
- + + + +-------------------------------------------------------------------------------------------------+ +--------+--------+-----------+
- | | | | :config:`Conversational RuBERT <classifiers/rusentiment_convers_bert.json>` | | 0.739 | 0.7724 | 1.5 Gb |
- +------------------+--------------------+------+-------------------------------------------------------------------------------------------------+-------------+--------+--------+-----------+
+ +------------------+----------------------+------+-------------------------------------------------------------------------------------------------+-------------+--------------+--------------+-----------+
+ | Task | Dataset | Lang | Model | Metric | Valid | Test | Downloads |
+ +==================+======================+======+=================================================================================================+=============+==============+==============+===========+
+ | Insult detection | `Insults`_ | En | :config:`English BERT <classifiers/insults_kaggle_bert.json>` | ROC-AUC | 0.9327 | 0.8602 | 1.1 Gb |
+ +------------------+----------------------+ +-------------------------------------------------------------------------------------------------+-------------+--------------+--------------+-----------+
+ | Sentiment |`SST`_ | | :config:`5-classes SST on conversational BERT <classifiers/sentiment_sst_conv_bert.json>` | Accuracy | 0.6293 | 0.6626 | 1.1 Gb |
+ +------------------+----------------------+------+-------------------------------------------------------------------------------------------------+-------------+--------------+--------------+-----------+
+ | Sentiment |`Twitter mokoron`_ | Ru | :config:`RuWiki+Lenta emb w/o preprocessing <classifiers/sentiment_twitter.json>` | F1-macro | 0.9965 | 0.9961 | 6.2 Gb |
+ + +----------------------+ +-------------------------------------------------------------------------------------------------+-------------+--------------+--------------+-----------+
+ | |`RuSentiment`_ | | :config:`Multilingual BERT <classifiers/rusentiment_bert.json>` | F1-weighted | 0.6787 | 0.7005 | 1.3 Gb |
+ + + + +-------------------------------------------------------------------------------------------------+ +--------------+--------------+-----------+
+ | | | | :config:`Conversational RuBERT <classifiers/rusentiment_convers_bert.json>` | | 0.739 | 0.7724 | 1.5 Gb |
+ +------------------+----------------------+------+-------------------------------------------------------------------------------------------------+-------------+--------------+--------------+-----------+
+ | Topics | `DeepPavlov Topics`_ | En | :config:`Distil BERT base uncased <classifiers/topics_distilbert_base_uncased.json>` | F1-w / F1-m | 0.877/0.830 | 0.878/0.831 | 0.7 Gb |
+ +------------------+----------------------+------+-------------------------------------------------------------------------------------------------+-------------+--------------+--------------+-----------+

.. _`DSTC 2`: http://camdial.org/~mh521/dstc/
.. _`Insults`: https://www.kaggle.com/c/detecting-insults-in-social-commentary
@@ -257,6 +259,7 @@ corresponding to `very negative`, `negative`, `neutral`, `positive`, `very posit
.. _`Yahoo-L31`: https://webscope.sandbox.yahoo.com/catalog.php?datatype=l
.. _`Yahoo-L6`: https://webscope.sandbox.yahoo.com/catalog.php?datatype=l
.. _`SST`: https://nlp.stanford.edu/sentiment/index.html
+ .. _`DeepPavlov Topics`: https://deeppavlov.ai/datasets/topics

GLUE Benchmark
--------------