articles/ai-foundry/model-inference/includes/create-model-deployments/cli.md
To add a model, you first need to identify the model that you want to deploy.
2. If you have more than one subscription, select the one where your resource is located:
    ```azurecli
    az account set --subscription $subscriptionId
    ```
3. Set the following environment variables with the name of the Azure AI Services resource you plan to use and its resource group:
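    For example, a minimal sketch where the values are placeholders for your own resource and resource group (the variable names match those used in the commands later in this article):

    ```azurecli
    # Placeholder values: replace with the name of your Azure AI Services
    # resource and the resource group that contains it.
    accountName="my-ai-services-resource"
    resourceGroupName="my-resource-group"
    ```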
You can deploy the same model multiple times if needed, as long as each deployment has a different name. This capability might be useful when you want to test different configurations for a given model, including content safety.
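For instance, a second deployment of the same model under a new name might look like the following sketch. It assumes the `az cognitiveservices account deployment create` command; the deployment name, model version, and SKU values here are illustrative assumptions to adapt to your case.

```azurecli
# Sketch: deploy the same model a second time under a different deployment
# name. The --model-version and SKU values are assumptions; adjust as needed.
az cognitiveservices account deployment create \
    -n $accountName \
    -g $resourceGroupName \
    --deployment-name "Phi-3.5-vision-instruct-2" \
    --model-name "Phi-3.5-vision-instruct" \
    --model-format "Microsoft" \
    --model-version "2" \
    --sku-name "GlobalStandard" \
    --sku-capacity 1
```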
## Use the model
Deployed models in Azure AI model inference can be consumed using the [Azure AI model's inference endpoint](../../concepts/endpoints.md) for the resource. When constructing your request, set the `model` parameter to the name of the model deployment you created. You can get the URI for the inference endpoint programmatically with the following command:
__Inference endpoint__
```azurecli
az cognitiveservices account show -n $accountName -g $resourceGroupName | jq '.properties.endpoints["Azure AI Model Inference API"]'
```
To make requests to the Azure AI model inference endpoint, append the route `models` to the endpoint URL, for example `https://<resource>.services.ai.azure.com/models`. You can find the API reference for the endpoint on the [Azure AI model inference API reference page](https://aka.ms/azureai/modelinference).
__Inference keys__
```azurecli
az cognitiveservices account keys list -n $accountName -g $resourceGroupName
```
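With the endpoint and a key in hand, you can call the endpoint directly. The following is a minimal sketch, not taken from this article: the `api-version` value, the `api-key` authentication header, and the payload shape are assumptions to verify against the API reference linked above.

```azurecli
# Sketch only: the api-version, the api-key header, and the payload shape
# are assumptions; confirm them against the Azure AI model inference API
# reference before relying on them.
endpoint=$(az cognitiveservices account show -n $accountName -g $resourceGroupName \
    | jq -r '.properties.endpoints["Azure AI Model Inference API"]')
key=$(az cognitiveservices account keys list -n $accountName -g $resourceGroupName \
    | jq -r '.key1')

# Assumes the endpoint value ends with a trailing slash.
curl "${endpoint}models/chat/completions?api-version=2024-05-01-preview" \
    -H "Content-Type: application/json" \
    -H "api-key: $key" \
    -d '{
        "model": "Phi-3.5-vision-instruct",
        "messages": [{"role": "user", "content": "Hello!"}]
    }'
```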
## Manage deployments
You can see all the deployments available using the CLI:
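A minimal sketch of that command, assuming `az cognitiveservices account deployment list` and the variables set earlier (the exact invocation in the full article may differ):

```azurecli
# List every model deployment under the Azure AI Services resource.
az cognitiveservices account deployment list \
    -n $accountName \
    -g $resourceGroupName
```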
```azurecli
    --deployment-name "Phi-3.5-vision-instruct" \
    -n $accountName \
    -g $resourceGroupName
```