Commit 5d0e049

Update code and text

1 parent 90f74b7 commit 5d0e049
File tree

1 file changed: +22 −12 lines changed


articles/machine-learning/how-to-inference-server-http.md

Lines changed: 22 additions & 12 deletions
@@ -222,11 +222,11 @@ The following procedure runs the server locally with [sample files](https://gith
 
 1. Create and activate a virtual environment with [conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html):
 
-   In this example, the `azureml-inference-server-http` package is automatically installed. The package is included as a dependent library of the `azureml-defaults` package in the _conda.yml_ file:
+   In this example, the `azureml-inference-server-http` package is automatically installed. The package is included as a dependent library of the `azureml-defaults` package in the _conda.yaml_ file:
 
    ```bash
    # Create the environment from the YAML file
-   conda env create --name model-env -f ./environment/conda.yml
+   conda env create --name model-env -f ./environment/conda.yaml
    # Activate the new environment
    conda activate model-env
    ```
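For reference, a conda environment file along the following lines would pull the server package in through `azureml-defaults`. This is a hypothetical minimal sketch; the actual _conda.yaml_ in the sample repository may pin different versions and list additional dependencies:

```yaml
# Hypothetical minimal conda.yaml; the sample repository's file may differ.
name: model-env
channels:
  - conda-forge
dependencies:
  - python=3.9
  - pip
  - pip:
      # azureml-defaults depends on azureml-inference-server-http,
      # so the server package is installed automatically.
      - azureml-defaults
```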
@@ -318,19 +318,24 @@ There are two ways to obtain log data for the inference HTTP server test:
 When the server starts, the logs show the initial server settings as follows:
 
 ```console
-Azure Machine Learning Inferencing HTTP server <version>
+Azure ML Inferencing HTTP server <version>
+
 
 Server Settings
 ---------------
-Entry Script Name: <entry_script>
-Model Directory: <model_dir>
-Worker Count: <worker_count>
+Entry Script Name: <entry-script>
+Model Directory: <model-directory>
+Config File: <configuration-file>
+Worker Count: <worker-count>
 Worker Timeout (seconds): None
 Server Port: <port>
+Health Port: <port>
 Application Insights Enabled: false
-Application Insights Key: <appinsights_instrumentation_key>
+Application Insights Key: <Application-Insights-instrumentation-key>
 Inferencing HTTP server version: azmlinfsrv/<version>
-CORS for the specified origins: <access_control_allow_origins>
+CORS for the specified origins: <access-control-allow-origins>
+Create dedicated endpoint for health: <health-check-endpoint>
+
 
 Server Routes
 ---------------
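As a rough illustration (not part of `azureml-inference-server-http` itself), the key–value pairs in a settings block like the one above can be collected into a dict with a few lines of Python:

```python
def parse_server_settings(log_text: str) -> dict:
    """Collect 'Key: Value' pairs from a Server Settings log block.

    Illustrative helper only; not shipped with the server package.
    """
    settings = {}
    for line in log_text.splitlines():
        key, sep, value = line.partition(": ")
        # The title and underline rows have no ': ' separator and are skipped.
        if sep:
            settings[key.strip()] = value.strip()
    return settings


sample = """\
Server Settings
---------------
Entry Script Name: <entry-script>
Worker Count: 1
Server Port: 5001"""

print(parse_server_settings(sample))
```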
@@ -343,19 +348,23 @@ Score: POST 127.0.0.1:<port>/score
 For example, when you launch the server by following the [end-to-end example](#use-an-end-to-end-example), the log displays as follows:
 
 ```console
-Azure Machine Learning Inferencing HTTP server v0.8.0
+Azure ML Inferencing HTTP server v1.2.2
+
 
 Server Settings
 ---------------
 Entry Script Name: /home/user-name/azureml-examples/cli/endpoints/online/model-1/onlinescoring/score.py
 Model Directory: ./
+Config File: None
 Worker Count: 1
 Worker Timeout (seconds): None
 Server Port: 5001
+Health Port: 5001
 Application Insights Enabled: false
 Application Insights Key: None
-Inferencing HTTP server version: azmlinfsrv/0.8.0
+Inferencing HTTP server version: azmlinfsrv/1.2.2
 CORS for the specified origins: None
+Create dedicated endpoint for health: None
 
 Server Routes
 ---------------
@@ -373,14 +382,15 @@ Initializing logger
 2022-12-24 07:37:54,518 I [32756] azmlinfsrv.user_script - Invoking user's init function
 2022-12-24 07:37:55,974 I [32756] azmlinfsrv.user_script - Users's init has completed successfully
 2022-12-24 07:37:55,976 I [32756] azmlinfsrv.swagger - Swaggers are prepared for the following versions: [2, 3, 3.1].
-2022-12-24 07:37:55,977 I [32756] azmlinfsrv - AML_FLASK_ONE_COMPATIBILITY is set, but patching is not necessary.
+2022-12-24 07:37:55,976 I [32756] azmlinfsrv - Scoring timeout is set to 3600000
+2022-12-24 07:37:55,976 I [32756] azmlinfsrv - Worker with pid 32756 ready for serving traffic
 ```
 
 ### Understand log data format
 
 All logs from the inference HTTP server, except for the launcher script, present data in the following format:
 
-`<UTC Time> | <level> [<pid>] <logger name> - <message>`
+`<UTC Time> <level> [<pid>] <logger name> - <message>`
 
 The entry consists of the following components:
