
Commit 3f0d7ba

use lower case
1 parent f5b5041 commit 3f0d7ba

File tree

1 file changed: +4 −4 lines


articles/machine-learning/how-to-inference-server-http.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -32,7 +32,7 @@ This article focuses on the Azure Machine Learning inference HTTP server.
 
 The following table provides an overview of scenarios to help you choose what works best for you.
 
-| Scenario | Inference HTTP Server | Local endpoint |
+| Scenario | Inference HTTP server | Local endpoint |
 | ----------------------------------------------------------------------- | --------------------- | -------------- |
 | Update local Python environment **without** Docker image rebuild | Yes | No |
 | Update scoring script | Yes | Yes |
```
```diff
@@ -130,7 +130,7 @@ Now you can modify the scoring script (`score.py`) and test your changes by runn
 
 There are two ways to use Visual Studio Code (VS Code) and [Python Extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) to debug with [azureml-inference-server-http](https://pypi.org/project/azureml-inference-server-http/) package ([Launch and Attach modes](https://code.visualstudio.com/docs/editor/debugging#_launch-versus-attach-configurations)).
 
-- **Launch mode**: set up the `launch.json` in VS Code and start the AzureML Inference HTTP Server within VS Code.
+- **Launch mode**: set up the `launch.json` in VS Code and start the AzureML inference HTTP server within VS Code.
   1. Start VS Code and open the folder containing the script (`score.py`).
   1. Add the following configuration to `launch.json` for that workspace in VS Code:
 
```
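The `launch.json` contents that the Launch-mode steps above refer to are not included in this diff. As a hedged sketch only, a Launch-mode entry for this server could look roughly like the following; the module path, argument names, and port are assumptions for illustration, not part of this commit:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Debug score.py with the inference HTTP server (hypothetical)",
            "type": "python",
            "request": "launch",
            "module": "azureml_inference_server_http.amlserver",
            "args": ["--entry_script", "score.py", "--port", "5001"]
        }
    ]
}
```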
```diff
@@ -155,7 +155,7 @@ There are two ways to use Visual Studio Code (VS Code) and [Python Extension](ht
 
 1. Start debugging session in VS Code. Select "Run" -> "Start Debugging" (or `F5`).
 
-- **Attach mode**: start the AzureML Inference HTTP Server in a command line and use VS Code + Python Extension to attach to the process.
+- **Attach mode**: start the AzureML inference HTTP server in a command line and use VS Code + Python Extension to attach to the process.
   > [!NOTE]
   > If you're using Linux environment, first install the `gdb` package by running `sudo apt-get install -y gdb`.
   1. Add the following configuration to `launch.json` for that workspace in VS Code:
```
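Both debugging modes above operate on the scoring script (`score.py`), which is not part of this diff. As a minimal sketch under assumptions: an Azure Machine Learning scoring script exposes an `init()`/`run()` pair, where `init()` loads the model once at server start and `run()` handles each request. The stand-in "model" below is purely illustrative, not from this commit:

```python
import json

# Stand-in for a real model; in an actual score.py, init() would load
# a serialized model (e.g. from the AZUREML_MODEL_DIR path). This stub
# is an assumption for illustration only.
model = None


def init():
    """Called once when the server starts; load the model here."""
    global model
    model = lambda values: [v * 2 for v in values]  # placeholder model


def run(raw_data):
    """Called per request; raw_data is the request body as a JSON string."""
    data = json.loads(raw_data)["data"]
    result = model(data)
    return json.dumps({"result": result})
```

With a script like this in the open folder, Launch mode runs it under the debugger directly, while Attach mode connects to the already-running server process.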
```diff
@@ -272,7 +272,7 @@ The following steps explain how the Azure Machine Learning inference HTTP server
 
 ## Understanding logs
 
-Here we describe logs of the AzureML Inference HTTP Server. You can get the log when you run the `azureml-inference-server-http` locally, or [get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs) if you're using online endpoints.
+Here we describe logs of the AzureML inference HTTP server. You can get the log when you run the `azureml-inference-server-http` locally, or [get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs) if you're using online endpoints.
 
 > [!NOTE]
 > The logging format has changed since version 0.8.0. If you find your log in different style, update the `azureml-inference-server-http` package to the latest version.
```
