| Update local Python environment **without** Docker image rebuild | Yes | No |
| Update scoring script | Yes | Yes |
There are two ways to use Visual Studio Code (VS Code) and the [Python Extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) to debug with the [azureml-inference-server-http](https://pypi.org/project/azureml-inference-server-http/) package: [Launch and Attach modes](https://code.visualstudio.com/docs/editor/debugging#_launch-versus-attach-configurations).
- **Launch mode**: set up `launch.json` in VS Code and start the AzureML inference HTTP server within VS Code.
1. Start VS Code and open the folder containing the script (`score.py`).
1. Add the following configuration to `launch.json` for that workspace in VS Code:
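   A minimal configuration might look like the following sketch. The module path `azureml_inference_server_http.amlserver`, the entry script name, and the port are assumptions; adjust them to match your environment:

   ```json
   {
       "version": "0.2.0",
       "configurations": [
           {
               "name": "Launch AzureML inference HTTP server",
               "type": "python",
               "request": "launch",
               "module": "azureml_inference_server_http.amlserver",
               "args": ["--entry_script", "score.py", "--port", "5001"]
           }
       ]
   }
   ```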
1. Start a debugging session in VS Code. Select "Run" -> "Start Debugging" (or press `F5`).
- **Attach mode**: start the AzureML inference HTTP server in a command line and use VS Code + Python Extension to attach to the process.
> [!NOTE]
> If you're using a Linux environment, first install the `gdb` package by running `sudo apt-get install -y gdb`.
1. Add the following configuration to `launch.json` for that workspace in VS Code:
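   A minimal attach configuration might look like the following sketch, which uses the standard VS Code process picker to select the running server process (the configuration name is an assumption; adjust as needed):

   ```json
   {
       "version": "0.2.0",
       "configurations": [
           {
               "name": "Attach to AzureML inference HTTP server",
               "type": "python",
               "request": "attach",
               "processId": "${command:pickProcess}"
           }
       ]
   }
   ```

   With this configuration, starting the debugger prompts you to pick the server process, after which breakpoints in `score.py` are hit on incoming requests.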
## Understanding logs
Here we describe the logs of the AzureML inference HTTP server. You can get the logs when you run `azureml-inference-server-http` locally, or [get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs) if you're using online endpoints.
> [!NOTE]
> The logging format has changed since version 0.8.0. If your logs appear in a different style, update the `azureml-inference-server-http` package to the latest version.