Commit 15058cb

add description

1 parent bdc5e9a commit 15058cb

File tree

1 file changed: +6 −3 lines changed

articles/machine-learning/how-to-inference-server-http.md

Lines changed: 6 additions & 3 deletions
```diff
@@ -34,10 +34,10 @@ The following table provides an overview of scenarios to help you choose what wo
 
 | Scenario | Inference HTTP Server | Local endpoint |
 | ----------------------------------------------------------------------- | --------------------- | -------------- |
-| Update local Python environment **without** Docker image rebuild | Yes | No |
+| Update local Python environment **without** Docker image rebuild | Yes | No |
 | Update scoring script | Yes | Yes |
 | Update deployment configurations (deployment, environment, code, model) | No | Yes |
-| Integrate VS Code Debugger | Yes | Yes |
+| Integrate VS Code Debugger | Yes | Yes |
 
 By running the inference HTTP server locally, you can focus on debugging your scoring script without being affected by the deployment container configurations.
 
```
````diff
@@ -61,7 +61,10 @@ python -m pip install azureml-inference-server-http
 ```
 
 ## Debug your scoring script locally
-### Understand the server behavior with a dummy scoring script
+
+To debug your scoring script locally, you can test how the server behaves with a dummy scoring script, use VS Code to debug with the [azureml-inference-server-http](https://pypi.org/project/azureml-inference-server-http/) package, or test the server with an actual scoring script, model file, and environment file in our examples.
+
+### Test the server behavior with a dummy scoring script
 1. Create a directory to hold your files:
 
 ```bash
````

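For context, the dummy scoring script that the new section describes might look like the sketch below. The `init()`/`run()` entry points are the contract the AzureML inference HTTP server loads from a scoring script; the echo behavior and the file name `score.py` are illustrative assumptions here, not the article's actual example.

```python
# score.py -- minimal dummy scoring script (illustrative sketch)

def init():
    # Called once when the server starts. A real scoring script would
    # load the model here; the dummy script has nothing to load.
    print("init() called: server started")

def run(raw_data):
    # Called once per scoring request; raw_data carries the request
    # payload. A real script would run inference; the dummy echoes it.
    print(f"run() received: {raw_data}")
    return {"echo": raw_data}

if __name__ == "__main__":
    # Exercise the entry points directly, without the server.
    init()
    print(run('{"value": 1}'))
```

With the package installed, the server is typically pointed at such a script with `azmlinfsrv --entry_script score.py` and exercised by POSTing to the local scoring endpoint (port 5001 by default for this package; verify against the package documentation).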