This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Commit b509acf (1 parent: be779c5)

fix DeepSparse product example server config

File tree: 1 file changed, +8 −8 lines


src/content/products/deepsparse.mdx

Lines changed: 8 additions & 8 deletions
````diff
@@ -84,14 +84,14 @@ To serve multiple models in your deployment you can easily build a `config.yaml`
 num_cores: 1
 num_workers: 1
 endpoints:
-- task: question_answering
-  route: /predict/question_answering/base
-  model: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/base-none
-  batch_size: 1
-- task: question_answering
-  route: /predict/question_answering/pruned_quant
-  model: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned80_quant-none-vnni
-  batch_size: 1
+  - task: question_answering
+    route: /predict/question_answering/base
+    model: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/base-none
+    batch_size: 1
+  - task: question_answering
+    route: /predict/question_answering/pruned_quant
+    model: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned80_quant-none-vnni
+    batch_size: 1
 ```
 
 Finally, after your `config.yaml` file is built, run the server with the config file path as an argument:
````
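For reference, a sketch of how the endpoint section of `config.yaml` reads after this fix, assembled from the added lines of the commit; the exact indentation is a reasonable YAML reading, since the diff view does not preserve leading whitespace:

```yaml
num_cores: 1
num_workers: 1
endpoints:
  - task: question_answering
    route: /predict/question_answering/base
    model: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/base-none
    batch_size: 1
  - task: question_answering
    route: /predict/question_answering/pruned_quant
    model: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned80_quant-none-vnni
    batch_size: 1
```

The server is then started with the config path, e.g. `deepsparse.server --config_file config.yaml`; the flag name is an assumption here (it has varied across DeepSparse releases), so confirm with `deepsparse.server --help` on your install.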
