articles/machine-learning/how-to-inference-server-http.md
4 additions & 4 deletions
@@ -175,13 +175,13 @@ There are two ways to use Visual Studio Code (VS Code) and [Python Extension](ht
     ]
   }
   ```
-2. Start the inference server using CLI (`azmlinfsrv --entry_script score.py`).
-3. Start debugging session in VS Code.
+1. Start the inference server using CLI (`azmlinfsrv --entry_script score.py`).
+1. Start debugging session in VS Code.
   1. In VS Code, select "Run" -> "Start Debugging" (or `F5`).
-  2. Enter the process ID of the `azmlinfsrv` (not the `gunicorn`) using the logs (from the inference server) displayed in the CLI.
+  1. Enter the process ID of the `azmlinfsrv` (not the `gunicorn`) using the logs (from the inference server) displayed in the CLI.
 
   :::image type="content" source="./media/how-to-inference-server-http/debug-attach-pid.png" alt-text="Screenshot of the CLI which shows the process ID of the server":::
 
   > [!NOTE]
-  > If you're using Linux environment, install `gdb` package
+  > If the process picker does not display, manually enter the process ID in the `processId` field of the `launch.json`.
 
 In both ways, you can set [breakpoint](https://code.visualstudio.com/docs/editor/debugging#_breakpoints) and debug step by step.
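For context on the updated note: the `launch.json` the note refers to is the attach configuration whose closing lines appear as context at the top of this hunk. Below is a minimal sketch of what such a configuration might look like; the configuration name and surrounding structure are assumptions, since only the tail of the file is visible in this diff.

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            // Hypothetical name; attach the VS Code Python debugger to a running process.
            "name": "Python: Attach using Process ID",
            "type": "python",
            "request": "attach",
            // Normally resolved interactively via the process picker.
            // If the picker does not appear, replace this value with the
            // process ID of azmlinfsrv (not gunicorn) taken from the server logs.
            "processId": "${command:pickProcess}"
        }
    ]
}
```

Swapping `${command:pickProcess}` for the literal `azmlinfsrv` process ID is the manual workaround the new note describes.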