
Commit 05dfe22

Merge pull request #226961 from shohei1029/patch-10
move inference HTTP server article into online endpoints
2 parents 39f4a61 + 3f0d7ba

File tree

2 files changed: +6 −7 lines changed

articles/machine-learning/how-to-inference-server-http.md

Lines changed: 4 additions & 4 deletions

@@ -32,7 +32,7 @@ This article focuses on the Azure Machine Learning inference HTTP server.
 
 The following table provides an overview of scenarios to help you choose what works best for you.
 
-| Scenario | Inference HTTP Server | Local endpoint |
+| Scenario | Inference HTTP server | Local endpoint |
 | ----------------------------------------------------------------------- | --------------------- | -------------- |
 | Update local Python environment **without** Docker image rebuild | Yes | No |
 | Update scoring script | Yes | Yes |
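Both rows of the table above exercise the same scoring-script contract: the inference HTTP server imports the script and calls its `init()` function once at startup and its `run()` function per request. A minimal self-contained sketch of such a `score.py` (the doubling "model" is a placeholder; a real script would load a model from disk in `init()`):

```python
import json

model = None  # populated once per server process


def init():
    # Called once when the inference HTTP server starts.
    # A real script would load the model (e.g. from AZUREML_MODEL_DIR) here;
    # this placeholder "model" just doubles every input.
    global model
    model = lambda xs: [x * 2 for x in xs]


def run(raw_data):
    # Called for each scoring request; raw_data is the request body as a string.
    data = json.loads(raw_data)["data"]
    return model(data)


if __name__ == "__main__":
    init()
    print(run(json.dumps({"data": [1, 2, 3]})))  # prints [2, 4, 6]
```

Once `azureml-inference-server-http` is installed, a script like this can be served locally with the package's CLI (the command name `azmlinfsrv --entry_script score.py` is taken from the package, not from this diff).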
@@ -130,7 +130,7 @@ Now you can modify the scoring script (`score.py`) and test your changes by runn
 
 There are two ways to use Visual Studio Code (VS Code) and [Python Extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) to debug with [azureml-inference-server-http](https://pypi.org/project/azureml-inference-server-http/) package ([Launch and Attach modes](https://code.visualstudio.com/docs/editor/debugging#_launch-versus-attach-configurations)).
 
-- **Launch mode**: set up the `launch.json` in VS Code and start the AzureML Inference HTTP Server within VS Code.
+- **Launch mode**: set up the `launch.json` in VS Code and start the AzureML inference HTTP server within VS Code.
   1. Start VS Code and open the folder containing the script (`score.py`).
   1. Add the following configuration to `launch.json` for that workspace in VS Code:
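The hunk ends before the configuration itself, so here is a hedged sketch of what a launch-mode `launch.json` entry could look like: the `request` and `type` fields are standard VS Code Python debugger settings, but the `module` path and `args` are assumptions about the package layout, not values taken from this diff.

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Debug score.py (launch inference server)",
            "type": "python",
            "request": "launch",
            "module": "azureml_inference_server_http.amlserver",
            "args": ["--entry_script", "score.py"]
        }
    ]
}
```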

@@ -155,7 +155,7 @@ There are two ways to use Visual Studio Code (VS Code) and [Python Extension](ht
 
 1. Start debugging session in VS Code. Select "Run" -> "Start Debugging" (or `F5`).
 
-- **Attach mode**: start the AzureML Inference HTTP Server in a command line and use VS Code + Python Extension to attach to the process.
+- **Attach mode**: start the AzureML inference HTTP server in a command line and use VS Code + Python Extension to attach to the process.
   > [!NOTE]
   > If you're using Linux environment, first install the `gdb` package by running `sudo apt-get install -y gdb`.
   1. Add the following configuration to `launch.json` for that workspace in VS Code:
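Again the configuration body is elided by the diff; a plausible attach-mode entry, sketched from the standard VS Code Python debugger fields rather than from the article, would pick the already-running server process interactively:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach to inference HTTP server",
            "type": "python",
            "request": "attach",
            "processId": "${command:pickProcess}",
            "justMyCode": true
        }
    ]
}
```

`${command:pickProcess}` is a built-in VS Code variable that prompts for the target process, which is why the server must already be running from the command line in this mode.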
@@ -272,7 +272,7 @@ The following steps explain how the Azure Machine Learning inference HTTP server
 
 ## Understanding logs
 
-Here we describe logs of the AzureML Inference HTTP Server. You can get the log when you run the `azureml-inference-server-http` locally, or [get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs) if you're using online endpoints.
+Here we describe logs of the AzureML inference HTTP server. You can get the log when you run the `azureml-inference-server-http` locally, or [get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs) if you're using online endpoints.
 
 > [!NOTE]
 > The logging format has changed since version 0.8.0. If you find your log in different style, update the `azureml-inference-server-http` package to the latest version.

articles/machine-learning/toc.yml

Lines changed: 2 additions & 3 deletions

@@ -611,6 +611,8 @@
       href: how-to-monitor-online-endpoints.md
     - name: Debug online endpoints locally VS Code
       href: how-to-debug-managed-online-endpoints-visual-studio-code.md
+    - name: Debug scoring script with inference HTTP server
+      href: how-to-inference-server-http.md
     - name: Troubleshoot online endpoints
       href: how-to-troubleshoot-online-endpoints.md
     - name: Batch endpoints
@@ -651,9 +653,6 @@
       href: how-to-use-event-grid-batch.md
     - name: Use REST to deploy a model as batch endpoints
       href: how-to-deploy-batch-with-rest.md
-    - name: Inference HTTP server
-      displayName: local debug
-      href: how-to-inference-server-http.md
     - name: Work with MLflow
       items:
       - name: Configure MLflow for Azure Machine Learning
