Commit 09cf91a

Add DeiT, Swin, and Yolos vision models (#262)
* Add support for `DeiT` models
* Add `Swin` models for image classification
* Add support for `yolos` models
* Add `YolosFeatureExtractor`
* Remove unused import
* Update list of supported models
* Remove SAM for now; move SAM support to the next release
1 parent f057317 commit 09cf91a

File tree

6 files changed: +195 additions, −74 deletions

README.md

Lines changed: 3 additions & 0 deletions
@@ -260,6 +260,7 @@ You can refine your search by selecting the task you're interested in (e.g., [te
 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
+1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT.
 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
@@ -276,11 +277,13 @@ You can refine your search by selecting the task you're interested in (e.g., [te
 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
+1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
+1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.

docs/snippets/6_supported-models.snippet

Lines changed: 3 additions & 0 deletions
@@ -8,6 +8,7 @@
 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
+1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT.
 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
@@ -24,10 +25,12 @@
 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
+1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
+1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.

scripts/supported_models.py

Lines changed: 30 additions & 5 deletions
@@ -102,6 +102,12 @@
         'sileod/deberta-v3-base-tasksource-nli',
         'sileod/deberta-v3-large-tasksource-nli',
     ],
+    'deit': [
+        'facebook/deit-tiny-distilled-patch16-224',
+        'facebook/deit-small-distilled-patch16-224',
+        'facebook/deit-base-distilled-patch16-224',
+        'facebook/deit-base-distilled-patch16-384',
+    ],
     'detr': [
         'facebook/detr-resnet-50',
         'facebook/detr-resnet-101',
@@ -185,15 +191,27 @@
         'sentence-transformers/all-roberta-large-v1',
         'julien-c/EsperBERTo-small-pos',
     ],
-    'sam': [
-        'facebook/sam-vit-base',
-        'facebook/sam-vit-large',
-        'facebook/sam-vit-huge',
-    ],
+    # 'sam': [
+    #     'facebook/sam-vit-base',
+    #     'facebook/sam-vit-large',
+    #     'facebook/sam-vit-huge',
+    # ],
     'squeezebert': [
         'squeezebert/squeezebert-uncased',
         'squeezebert/squeezebert-mnli',
     ],
+    'swin': [
+        'microsoft/swin-tiny-patch4-window7-224',
+        'microsoft/swin-base-patch4-window7-224',
+        'microsoft/swin-large-patch4-window12-384-in22k',
+        'microsoft/swin-base-patch4-window7-224-in22k',
+        'microsoft/swin-base-patch4-window12-384-in22k',
+        'microsoft/swin-base-patch4-window12-384',
+        'microsoft/swin-large-patch4-window7-224',
+        'microsoft/swin-small-patch4-window7-224',
+        'microsoft/swin-large-patch4-window7-224-in22k',
+        'microsoft/swin-large-patch4-window12-384',
+    ],
     't5': [
         't5-small',
         't5-base',
@@ -260,6 +278,13 @@
         'openai/whisper-large',
         'openai/whisper-large-v2',
     ],
+    'yolos': [
+        'hustvl/yolos-tiny',
+        'hustvl/yolos-small',
+        'hustvl/yolos-base',
+        'hustvl/yolos-small-dwr',
+        'hustvl/yolos-small-300',
+    ]
 }
src/models.js

Lines changed: 67 additions & 0 deletions
@@ -2987,6 +2987,7 @@ export class LlamaForCausalLM extends LlamaPreTrainedModel {
 
 //////////////////////////////////////////////////
 export class ViTPreTrainedModel extends PreTrainedModel { }
+export class ViTModel extends ViTPreTrainedModel { }
 export class ViTForImageClassification extends ViTPreTrainedModel {
     /**
      * @param {any} model_inputs
@@ -2999,6 +3000,7 @@ export class ViTForImageClassification extends ViTPreTrainedModel {
 
 //////////////////////////////////////////////////
 export class MobileViTPreTrainedModel extends PreTrainedModel { }
+export class MobileViTModel extends MobileViTPreTrainedModel { }
 export class MobileViTForImageClassification extends MobileViTPreTrainedModel {
     /**
      * @param {any} model_inputs
@@ -3015,6 +3017,7 @@ export class MobileViTForImageClassification extends MobileViTPreTrainedModel {
 
 //////////////////////////////////////////////////
 export class DetrPreTrainedModel extends PreTrainedModel { }
+export class DetrModel extends DetrPreTrainedModel { }
 export class DetrForObjectDetection extends DetrPreTrainedModel {
     /**
      * @param {any} model_inputs
@@ -3066,6 +3069,60 @@ export class DetrSegmentationOutput extends ModelOutput {
 //////////////////////////////////////////////////
 
 
+//////////////////////////////////////////////////
+export class DeiTPreTrainedModel extends PreTrainedModel { }
+export class DeiTModel extends DeiTPreTrainedModel { }
+export class DeiTForImageClassification extends DeiTPreTrainedModel {
+    /**
+     * @param {any} model_inputs
+     */
+    async _call(model_inputs) {
+        return new SequenceClassifierOutput(await super._call(model_inputs));
+    }
+}
+//////////////////////////////////////////////////
+
+//////////////////////////////////////////////////
+export class SwinPreTrainedModel extends PreTrainedModel { }
+export class SwinModel extends SwinPreTrainedModel { }
+export class SwinForImageClassification extends SwinPreTrainedModel {
+    /**
+     * @param {any} model_inputs
+     */
+    async _call(model_inputs) {
+        return new SequenceClassifierOutput(await super._call(model_inputs));
+    }
+}
+//////////////////////////////////////////////////
+
+//////////////////////////////////////////////////
+export class YolosPreTrainedModel extends PreTrainedModel { }
+export class YolosModel extends YolosPreTrainedModel { }
+export class YolosForObjectDetection extends YolosPreTrainedModel {
+    /**
+     * @param {any} model_inputs
+     */
+    async _call(model_inputs) {
+        return new YolosObjectDetectionOutput(await super._call(model_inputs));
+    }
+}
+
+export class YolosObjectDetectionOutput extends ModelOutput {
+    /**
+     * @param {Object} output The output of the model.
+     * @param {Tensor} output.logits Classification logits (including no-object) for all queries.
+     * @param {Tensor} output.pred_boxes Normalized box coordinates for all queries, represented as (center_x, center_y, width, height).
+     * These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding).
+     */
+    constructor({ logits, pred_boxes }) {
+        super();
+        this.logits = logits;
+        this.pred_boxes = pred_boxes;
+    }
+}
+//////////////////////////////////////////////////
+
 //////////////////////////////////////////////////
 export class SamPreTrainedModel extends PreTrainedModel { }
 export class SamModel extends SamPreTrainedModel {
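The new `YolosObjectDetectionOutput` documents `pred_boxes` as `(center_x, center_y, width, height)` values normalized to `[0, 1]` relative to each image's size. A minimal sketch (not part of this commit; the helper name is illustrative) of how a caller could convert one such box into absolute pixel corner coordinates:

```javascript
// Convert one normalized YOLOS-style box (center_x, center_y, width, height)
// into absolute (x_min, y_min, x_max, y_max) pixel coordinates.
function centerToCorners([cx, cy, w, h], imageWidth, imageHeight) {
    return [
        (cx - w / 2) * imageWidth,   // x_min
        (cy - h / 2) * imageHeight,  // y_min
        (cx + w / 2) * imageWidth,   // x_max
        (cy + h / 2) * imageHeight,  // y_max
    ];
}

// A box centered in a 640x480 image, covering half of each dimension.
const corners = centerToCorners([0.5, 0.5, 0.5, 0.5], 640, 480);
// corners is [160, 120, 480, 360]
```

Because the values are normalized per image (disregarding any padding), the conversion needs each image's original dimensions.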
@@ -3400,6 +3457,13 @@ const MODEL_MAPPING_NAMES_ENCODER_ONLY = new Map([
     ['squeezebert', SqueezeBertModel],
     ['wav2vec2', Wav2Vec2Model],
 
+    ['detr', DetrModel],
+    ['vit', ViTModel],
+    ['mobilevit', MobileViTModel],
+    ['deit', DeiTModel],
+    ['swin', SwinModel],
+    ['yolos', YolosModel],
+
     ['sam', SamModel], // TODO change to encoder-decoder when model is split correctly
 ]);
@@ -3495,10 +3559,13 @@ const MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES = new Map([
 const MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES = new Map([
     ['vit', ViTForImageClassification],
     ['mobilevit', MobileViTForImageClassification],
+    ['deit', DeiTForImageClassification],
+    ['swin', SwinForImageClassification],
 ]);
 
 const MODEL_FOR_OBJECT_DETECTION_MAPPING_NAMES = new Map([
     ['detr', DetrForObjectDetection],
+    ['yolos', YolosForObjectDetection],
 ]);
 
 const MODEL_FOR_IMAGE_SEGMENTATION_MAPPING_NAMES = new Map([
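These `MODEL_FOR_..._MAPPING_NAMES` maps pair a config's `model_type` string with a model class, which is what lets the `Auto*` loaders pick the right class for each new architecture. A self-contained sketch of that dispatch pattern (the class and function names below are stand-ins, not the library's actual API):

```javascript
// Stand-in model classes; the real library's classes wrap ONNX sessions.
class FakeSwinForImageClassification {
    static from_config(config) { return new this(config); }
    constructor(config) { this.config = config; }
}
class FakeDeiTForImageClassification extends FakeSwinForImageClassification { }

// model_type string -> class, mirroring MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES.
const IMAGE_CLASSIFICATION_MAPPING = new Map([
    ['swin', FakeSwinForImageClassification],
    ['deit', FakeDeiTForImageClassification],
]);

// Auto-style factory: look up the class for a config and instantiate it.
function autoModelForImageClassification(config) {
    const cls = IMAGE_CLASSIFICATION_MAPPING.get(config.model_type);
    if (!cls) throw new Error(`Unsupported model type: ${config.model_type}`);
    return cls.from_config(config);
}

const model = autoModelForImageClassification({ model_type: 'swin' });
// model is an instance of FakeSwinForImageClassification
```

Registering a model in the right task map is therefore all that is needed for the corresponding auto class (and pipelines built on it) to find it.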

0 commit comments