
Commit 2f635ef

fix
1 parent 3bcc8da commit 2f635ef

1 file changed: +3 −1 lines changed

articles/ai-foundry/how-to/deploy-models-managed-pay-go.md

Lines changed: 3 additions & 1 deletion
@@ -75,6 +75,8 @@ The consumption-based surcharge is accrued to the associated SaaS subscription a
 
 ## Subscribe and deploy on managed compute
 
+[!INCLUDE [tip-left-pane](../includes/tip-left-pane.md)]
+
 1. Sign in to [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs).
 1. If you're not already in your project, select it.
 1. Select **Model catalog** from the left pane.
@@ -172,7 +174,7 @@ You can also consume NIM deployments using the [Azure AI Foundry Models SDK](/py
 The following NVIDIA NIMs of **chat completions** task type in the model catalog can be used to [create and run agents using Agent Service](/python/api/overview/azure/ai-projects-readme#agents-preview) using various supported tools, with the following two extra requirements:
 
 1. Create a _Serverless Connection_ to the project using the NIM endpoint and Key. The target URL for the NIM endpoint in the connection should be `https://<endpoint-name>.region.inference.ml.azure.com/v1/`.
-2. Set the _model parameter_ in the request body to be of the form, `https://<endpoint>.region.inference.ml.azure.com/v1/@<parameter value per table below>` while creating and running agents.
+1. Set the _model parameter_ in the request body to be of the form, `https://<endpoint>.region.inference.ml.azure.com/v1/@<parameter value per table below>` while creating and running agents.
 
 
 | NVIDIA NIM | `model` parameter value |
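
For context on the renumbered step: the model parameter in requirement 2 is just the NIM endpoint URL and the catalog value joined with `@`. Below is a minimal sketch of what that might look like when creating an agent with the preview `azure-ai-projects` Python SDK. This is not part of this commit; the connection-string constructor, environment variable name, and method signatures are assumptions that may differ by SDK version, and the placeholder values stand in for entries from the table in the article.

```python
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

# Hypothetical placeholders: substitute your NIM endpoint name, region, and a
# `model` parameter value from the table in the article.
NIM_ENDPOINT = "https://<endpoint-name>.<region>.inference.ml.azure.com/v1"
MODEL_PARAM = "<parameter-value-per-table>"

# Assumed: the preview connection-string constructor and env var name; newer
# SDK versions may take an endpoint URL and different method names instead.
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],
)

# The model parameter is the NIM endpoint URL plus "@" plus the table value,
# matching the format described in requirement 2 of the diff above.
agent = project_client.agents.create_agent(
    model=f"{NIM_ENDPOINT}/@{MODEL_PARAM}",
    name="nim-agent",
    instructions="You are a helpful assistant.",
)
print(f"Created agent: {agent.id}")
```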
