This repository was archived by the owner on Jun 3, 2025. It is now read-only.
To serve multiple models in your deployment, you can easily build a `config.yaml`:
```yaml
num_cores: 1
num_workers: 1
endpoints:
    - task: question_answering
      route: /predict/question_answering/base
      model: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/base-none
      batch_size: 1
    - task: question_answering
      route: /predict/question_answering/pruned_quant
      model: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned80_quant-none-vnni
      batch_size: 1
```
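Before launching the server, it can be useful to sanity-check a multi-endpoint config like the one above. The sketch below mirrors the YAML as a Python dict and validates it with a hypothetical `validate` helper (not part of DeepSparse): every endpoint must carry the four fields used above, and no two endpoints may share a route.

```python
# Config dict mirroring the config.yaml shown above.
config = {
    "num_cores": 1,
    "num_workers": 1,
    "endpoints": [
        {
            "task": "question_answering",
            "route": "/predict/question_answering/base",
            "model": "zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/base-none",
            "batch_size": 1,
        },
        {
            "task": "question_answering",
            "route": "/predict/question_answering/pruned_quant",
            "model": "zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned80_quant-none-vnni",
            "batch_size": 1,
        },
    ],
}

def validate(cfg):
    """Hypothetical pre-flight check; raises ValueError on a bad config."""
    required = {"task", "route", "model", "batch_size"}
    # Every endpoint must define the four keys used in the example.
    for ep in cfg["endpoints"]:
        missing = required - ep.keys()
        if missing:
            raise ValueError(f"endpoint {ep.get('route')!r} missing {missing}")
    # Routes must be unique, otherwise requests would be ambiguous.
    routes = [ep["route"] for ep in cfg["endpoints"]]
    if len(routes) != len(set(routes)):
        raise ValueError("duplicate routes in config")
    return routes

print(validate(config))
```

Each endpoint maps one model to one route, which is what lets a single server answer requests for both the dense base model and the pruned-quantized one.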
Finally, after your `config.yaml` file is built, run the server with the config file path as an argument: