Commit 212c5ab

Update how-to-inference-server-http.md
1 parent 398270e commit 212c5ab

File tree

1 file changed (+2, −2 lines changed)

articles/machine-learning/how-to-inference-server-http.md

Lines changed: 2 additions & 2 deletions
@@ -179,7 +179,7 @@ There are two ways to use Visual Studio Code (VS Code) and [Python Extension](ht
 1. Start debugging session in VS Code.
 1. In VS Code, select "Run" -> "Start Debugging" (or `F5`).
 1. Enter the process ID of the `azmlinfsrv` (not the `gunicorn`) using the logs (from the inference server) displayed in the CLI.
-   :::image type="content" source="./media/how-to-inference-server-http/debug-attach-pid.png" alt-text="Screenshot of the CLI which shows the process ID of the server":::
+   :::image type="content" source="./media/how-to-inference-server-http/debug-attach-pid.png" alt-text="Screenshot of the CLI which shows the process ID of the server.":::
    > [!NOTE]
    > If the process picker does not display, manually enter the process ID in the `processId` field of the `launch.json`.
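For context, the `launch.json` mentioned in that note is VS Code's debug configuration file. A typical attach-by-process-ID entry looks like the following sketch (the `name` value is illustrative; `${command:pickProcess}` opens VS Code's process picker, and can be replaced with the literal `azmlinfsrv` PID when the picker does not display):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: Attach using Process ID",
      "type": "python",
      "request": "attach",
      "processId": "${command:pickProcess}"
    }
  ]
}
```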
@@ -268,7 +268,7 @@ The following steps explain how the Azure Machine Learning inference HTTP server
 1. The requests are then handled by a [Flask](https://flask.palletsprojects.com/) app, which loads the entry script & any dependencies.
 1. Finally, the request is sent to your entry script. The entry script then makes an inference call to the loaded model and returns a response.

-   :::image type="content" source="./media/how-to-inference-server-http/inference-server-architecture.png" alt-text="Diagram of the HTTP server process":::
+   :::image type="content" source="./media/how-to-inference-server-http/inference-server-architecture.png" alt-text="Diagram of the HTTP server process.":::

 ## Understanding logs
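The request flow this hunk describes (Flask app loads the entry script, then forwards each request to it for inference) can be sketched as a minimal Flask app. This is an illustrative sketch only, not the actual `azmlinfsrv` implementation; the `init()`/`run()` hooks mirror the entry-script contract, while the `/score` route name and the toy "model" are assumptions made for the example:

```python
# Minimal sketch of the described flow: a Flask app that "loads" an entry
# script and forwards each scoring request to its run() function.
# Hypothetical example only; not the azmlinfsrv implementation.
from flask import Flask, jsonify, request

app = Flask(__name__)

_model = None  # populated by init(), used by run()


def init():
    """Entry-script hook: load the model once at startup.

    Placeholder 'model' that doubles each input value.
    """
    global _model
    _model = lambda values: [v * 2 for v in values]


def run(raw_data):
    """Entry-script hook: make an inference call against the loaded model."""
    return _model(raw_data["data"])


# The server loads the entry script (and its dependencies) before serving.
init()


@app.route("/score", methods=["POST"])
def score():
    # The Flask app hands the parsed request body to the entry script's run().
    return jsonify(run(request.get_json()))


if __name__ == "__main__":
    app.run(port=5001)
```

In the real server, `gunicorn` sits in front of this Flask app as the WSGI server, which is why the debugging section above distinguishes the `azmlinfsrv` process from the `gunicorn` workers.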
