[compat] Introduce Transformers v5.2 compatibility: trainer _nested_gather moved (#3664)
* Introduce Transformers v5.2 compatibility: trainer _nested_gather moved
* Replace prajjwal1/bert-tiny due to issues loading with AutoConfig
* Disable the transformers progress bars in the CI
The weight-loading progress bars heavily inflate the CI logs
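The first bullet implies a version-gated lookup: code that previously called the private `Trainer._nested_gather` method must now find the helper wherever the installed Transformers release exposes it. A minimal sketch of that dispatch pattern, using dummy trainer classes; the fallback attribute name here is illustrative, not the actual Transformers v5.2 API:

```python
def resolve_gather(trainer):
    """Return the first gather helper this trainer object exposes.

    "_nested_gather" is the old private Trainer method; the fallback
    name "gather_function" is a placeholder standing in for wherever a
    newer release moved the helper.
    """
    for name in ("_nested_gather", "gather_function"):
        fn = getattr(trainer, name, None)
        if callable(fn):
            return fn
    raise AttributeError("no known gather helper on this trainer")


class OldTrainer:
    """Mimics a Transformers release that still has _nested_gather."""

    def _nested_gather(self, tensors):
        return tensors


class NewTrainer:
    """Mimics a release where the helper was renamed or relocated."""

    def gather_function(self, tensors):
        return tensors
```

The advantage of probing with `getattr` rather than comparing version strings is that the shim keeps working across patch releases as long as one of the candidate names exists.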
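For the progress-bar bullet, a sketch of one common way to silence the bars in CI: `HF_HUB_DISABLE_PROGRESS_BARS` is a real `huggingface_hub` environment variable, and `transformers.utils.logging.disable_progress_bar()` is the in-process equivalent (the import is guarded here so the sketch runs even without `transformers` installed). Whether this matches the exact mechanism used in the PR is an assumption:

```python
import os

# Set before importing transformers / huggingface_hub so download and
# weight-loading progress bars never start in CI logs.
os.environ["HF_HUB_DISABLE_PROGRESS_BARS"] = "1"

# In-process alternative once transformers is importable.
try:
    from transformers.utils import logging as hf_logging

    hf_logging.disable_progress_bar()
except ImportError:
    pass  # transformers not installed; the env var above still covers hub downloads
```

Setting the environment variable in the CI workflow configuration (rather than in test code) has the same effect and keeps the tests themselves unchanged.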
tests/cross_encoder/test_model_card.py (7 additions, 7 deletions)
@@ -46,7 +46,7 @@ def dummy_dataset():
 "- sentence-transformers",
 "- cross-encoder",
 "pipeline_tag: text-ranking",
-"This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny)",
+"This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [sentence-transformers-testing/stsb-bert-tiny-safetensors](https://huggingface.co/sentence-transformers-testing/stsb-bert-tiny-safetensors)",
 "It computes scores for pairs of texts, which can be used for text reranking and semantic search.",
 "**Maximum Sequence Length:** 512 tokens",
@@ -71,7 +71,7 @@ def dummy_dataset():
 "- sentence-transformers",
 "- cross-encoder",
 "pipeline_tag: text-classification",
-"This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny)",
+"This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [sentence-transformers-testing/stsb-bert-tiny-safetensors](https://huggingface.co/sentence-transformers-testing/stsb-bert-tiny-safetensors)",
 "It computes scores for pairs of texts, which can be used for text pair classification.",
 "**Maximum Sequence Length:** 512 tokens",
@@ -91,15 +91,15 @@ def dummy_dataset():
 1,
 1,
 [
-"This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the train_0 dataset using the [sentence-transformers](https://www.SBERT.net) library.",
+"This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [sentence-transformers-testing/stsb-bert-tiny-safetensors](https://huggingface.co/sentence-transformers-testing/stsb-bert-tiny-safetensors) on the train_0 dataset using the [sentence-transformers](https://www.SBERT.net) library.",
 "#### train_0",
 ],
 ),
 (
 2,
 1,
 [
-"This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the train_0 and train_1 datasets using the [sentence-transformers](https://www.SBERT.net) library.",
+"This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [sentence-transformers-testing/stsb-bert-tiny-safetensors](https://huggingface.co/sentence-transformers-testing/stsb-bert-tiny-safetensors) on the train_0 and train_1 datasets using the [sentence-transformers](https://www.SBERT.net) library.",
 "#### train_0",
 "#### train_1",
 ],
@@ -108,7 +108,7 @@ def dummy_dataset():
 10,
 1,
 [
-"This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the train_0, train_1, train_2, train_3, train_4, train_5, train_6, train_7, train_8 and train_9 datasets using the [sentence-transformers](https://www.SBERT.net) library.",
+"This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [sentence-transformers-testing/stsb-bert-tiny-safetensors](https://huggingface.co/sentence-transformers-testing/stsb-bert-tiny-safetensors) on the train_0, train_1, train_2, train_3, train_4, train_5, train_6, train_7, train_8 and train_9 datasets using the [sentence-transformers](https://www.SBERT.net) library.",
 "<details><summary>train_0</summary>",  # We start using <details><summary> if we have more than 3 datasets
-"This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on 50 datasets using the [sentence-transformers](https://www.SBERT.net) library.",
+"This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [sentence-transformers-testing/stsb-bert-tiny-safetensors](https://huggingface.co/sentence-transformers-testing/stsb-bert-tiny-safetensors) on 50 datasets using the [sentence-transformers](https://www.SBERT.net) library.",