[DeBERTa](https://huggingface.co/papers/2006.03654) (Decoding-enhanced BERT with Disentangled Attention), proposed by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen, improves the pretraining efficiency of BERT and RoBERTa with two key ideas: disentangled attention and an enhanced mask decoder. Instead of adding a word's content and position information into a single embedding like BERT, DeBERTa represents each word with two separate vectors, one for its *content* and one for its *position*, and computes attention from both. This gives the model a clearer sense of what is being said and where in the sentence it happens.
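
To make the first idea concrete, the snippet below is a minimal sketch of how disentangled attention combines content-to-content, content-to-position, and position-to-content terms into a single attention score. It is an illustration only, not the actual `modeling_deberta` code, and every name in it (`disentangled_scores`, `rel_idx`, and so on) is made up for the example.

```py
import torch

def disentangled_scores(q_c, k_c, q_r, k_r, rel_idx, d):
    """Toy single-head attention scores in the spirit of DeBERTa's disentangled attention.

    q_c, k_c: (seq, d) content projections of the hidden states
    q_r, k_r: (2k, d) projections of the relative position embeddings
    rel_idx:  (seq, seq) long tensor of relative-position buckets in [0, 2k)
    """
    c2c = q_c @ k_c.T                              # content-to-content
    c2p = torch.gather(q_c @ k_r.T, 1, rel_idx)    # content-to-position
    p2c = torch.gather(k_c @ q_r.T, 1, rel_idx).T  # position-to-content
    return (c2c + c2p + p2c) / (3 * d) ** 0.5      # scaled sum of the three terms

seq_len, d, k = 4, 8, 2
rel_idx = torch.randint(0, 2 * k, (seq_len, seq_len))
scores = disentangled_scores(
    torch.randn(seq_len, d), torch.randn(seq_len, d),
    torch.randn(2 * k, d), torch.randn(2 * k, d),
    rel_idx, d,
)
attention = scores.softmax(dim=-1)  # attention weights over the keys
```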

The enhanced mask decoder replaces the output softmax layer to better predict masked tokens during pretraining.

Even though it is trained on only half of the data used for RoBERTa-Large, DeBERTa performs consistently better on a wide range of NLP tasks, with gains on benchmarks such as MNLI, SQuAD v2.0, and RACE.

You can find all the original DeBERTa checkpoints under the [Microsoft](https://huggingface.co/microsoft?search_models=deberta) organization.

> [!TIP]
> Click on the DeBERTa models in the right sidebar for more examples of how to apply DeBERTa to different language tasks.

The example below demonstrates how to classify text with [`Pipeline`], [`AutoModel`], and from the command line.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

classifier = pipeline(
    task="text-classification",
    model="microsoft/deberta-base-mnli",
    device=0,
)

classifier({
    "text": "A soccer game with multiple people playing.",
    "text_pair": "Some people are playing a sport."
})
```

</hfoption>
<hfoption id="AutoModel">

```py
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base-mnli")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/deberta-base-mnli", device_map="auto")

inputs = tokenizer(
    "A soccer game with multiple people playing.",
    "Some people are playing a sport.",
    return_tensors="pt"
).to("cuda")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
labels = model.config.id2label
print(f"The predicted relation is: {labels[predicted_class]}")
```

</hfoption>
<hfoption id="transformers CLI">

```bash
echo -e '{"text": "A soccer game with multiple people playing.", "text_pair": "Some people are playing a sport."}' | transformers run --task text-classification --model microsoft/deberta-base-mnli --device 0
```

</hfoption>
</hfoptions>

## Notes

- DeBERTa uses **relative position embeddings**, so it does not require **right-padding** like BERT (see the sketch after this list).
- For best results, use DeBERTa on sentence-level or sentence-pair classification tasks like MNLI, RTE, or SST-2.
- If you're using DeBERTa for token-level tasks like masked language modeling, make sure to load a checkpoint specifically pretrained or fine-tuned for token-level tasks.
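
The snippet below is a minimal sketch, not part of the original documentation, that illustrates the first note: because DeBERTa relies on relative position embeddings, a padded batch of premise/hypothesis pairs needs no special padding handling. It reuses the `microsoft/deberta-base-mnli` checkpoint from the examples above; the sentences are made up.

```py
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base-mnli")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/deberta-base-mnli")

# Two premise/hypothesis pairs of different lengths; padding=True pads the
# shorter pair and the relative position embeddings handle the rest.
premises = ["A soccer game with multiple people playing.", "A man is sleeping."]
hypotheses = ["Some people are playing a sport.", "A man is running a marathon."]

inputs = tokenizer(premises, hypotheses, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

for premise, hypothesis, pred in zip(premises, hypotheses, logits.argmax(dim=-1).tolist()):
    print(f"{premise} / {hypothesis} -> {model.config.id2label[pred]}")
```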