Commit d69ffcb

review
1 parent 93c8b45 commit d69ffcb

19 files changed: +311 -202 lines changed

articles/ai-foundry/model-inference/includes/use-chat-completions/csharp.md

Lines changed: 12 additions & 19 deletions
@@ -37,7 +37,7 @@ First, create the client to consume the model. The following code uses an endpoi
 
 ```csharp
 ChatCompletionsClient client = new ChatCompletionsClient(
-    new Uri(Environment.GetEnvironmentVariable("AZURE_INFERENCE_ENDPOINT")),
+    new Uri("https://<resource>.services.ai.azure.com/api/models"),
     new AzureKeyCredential(Environment.GetEnvironmentVariable("AZURE_INFERENCE_CREDENTIAL")),
 );
 ```
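Taken together with the imports the include file assumes, the updated key-based snippet is self-contained as follows. This is a sketch assuming the `Azure.AI.Inference` package; `<resource>` is a placeholder and `AZURE_INFERENCE_CREDENTIAL` must hold a valid API key:

```csharp
using System;
using Azure;
using Azure.AI.Inference;

// Key-based authentication against the fixed endpoint form introduced by this change.
ChatCompletionsClient client = new ChatCompletionsClient(
    new Uri("https://<resource>.services.ai.azure.com/api/models"),
    new AzureKeyCredential(Environment.GetEnvironmentVariable("AZURE_INFERENCE_CREDENTIAL")));
```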
@@ -46,16 +46,9 @@ If you've configured the resource with **Microsoft Entra ID** support, you can u
 
 
 ```csharp
-TokenCredential credential = new DefaultAzureCredential(includeInteractiveCredentials: true);
-AzureAIInferenceClientOptions clientOptions = new AzureAIInferenceClientOptions();
-BearerTokenAuthenticationPolicy tokenPolicy = new BearerTokenAuthenticationPolicy(credential, new string[] { "https://cognitiveservices.azure.com/.default" });
-
-clientOptions.AddPolicy(tokenPolicy, HttpPipelinePosition.PerRetry);
-
 client = new ChatCompletionsClient(
-    new Uri(Environment.GetEnvironmentVariable("AZURE_INFERENCE_ENDPOINT")),
-    credential,
-    clientOptions,
+    new Uri("https://<resource>.services.ai.azure.com/api/models"),
+    new DefaultAzureCredential(),
 );
 ```
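The simplification above drops the manual `BearerTokenAuthenticationPolicy` pipeline in favor of the client's token-credential overload. A minimal sketch of the resulting snippet, assuming `Azure.Identity` supplies `DefaultAzureCredential`:

```csharp
using System;
using Azure.AI.Inference;
using Azure.Identity;

// Microsoft Entra ID authentication: DefaultAzureCredential resolves the ambient
// identity (Azure CLI login, managed identity, and so on). <resource> is a placeholder.
ChatCompletionsClient client = new ChatCompletionsClient(
    new Uri("https://<resource>.services.ai.azure.com/api/models"),
    new DefaultAzureCredential());
```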

@@ -77,7 +70,7 @@ Response<ChatCompletions> response = client.Complete(requestOptions);
 ```
 
 > [!NOTE]
-> Some models don't support system messages (`role="system"`). When you use the Foundry Models API, system messages are translated to user messages, which is the closest capability available. This translation is offered for convenience, but it's important for you to verify that the model is following the instructions in the system message with the right level of confidence.
+> Some models don't support system messages (`role="system"`). When you use the Azure AI model inference API, system messages are translated to user messages, which is the closest capability available. This translation is offered for convenience, but it's important for you to verify that the model is following the instructions in the system message with the right level of confidence.
 
 The response is as follows, where you can see the model's usage statistics:
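The note above concerns requests that pair a system message with a user message. For context, a sketch of that request shape against the client created earlier; the model name is hypothetical:

```csharp
var requestOptions = new ChatCompletionsOptions()
{
    Messages =
    {
        // On models without native system-message support, the service rewrites
        // this entry as a user message before forwarding the request.
        new ChatRequestSystemMessage("You are a helpful assistant."),
        new ChatRequestUserMessage("How many languages are in the world?"),
    },
    Model = "mistral-large-2407", // hypothetical deployment name
};

Response<ChatCompletions> response = client.Complete(requestOptions);
Console.WriteLine($"Response: {response.Value.Content}");
```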

@@ -158,7 +151,7 @@ StreamMessageAsync(client).GetAwaiter().GetResult();
 
 #### Explore more parameters supported by the inference client
 
-Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Foundry Models API reference](https://aka.ms/azureai/modelinference).
+Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Azure AI Model Inference API reference](https://aka.ms/azureai/modelinference).
 
 ```csharp
 requestOptions = new ChatCompletionsOptions()
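The hunk shows only the opening line of this example. A sketch of commonly tuned knobs on `ChatCompletionsOptions`, reusing the client from earlier; property names follow the `Azure.AI.Inference` package and the values are illustrative:

```csharp
requestOptions = new ChatCompletionsOptions()
{
    Messages = { new ChatRequestUserMessage("How many languages are in the world?") },
    Model = "mistral-large-2407",  // hypothetical deployment name
    MaxTokens = 512,               // cap on generated tokens
    Temperature = 0.7f,            // sampling temperature
    NucleusSamplingFactor = 0.95f, // top-p (nucleus) sampling
};

response = client.Complete(requestOptions);
```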
@@ -212,7 +205,7 @@ Console.WriteLine($"Response: {response.Value.Content}");
 
 ### Pass extra parameters to the model
 
-The Foundry Models API allows you to pass extra parameters to the model. The following code example shows how to pass the extra parameter `logprobs` to the model.
+The Azure AI Model Inference API allows you to pass extra parameters to the model. The following code example shows how to pass the extra parameter `logprobs` to the model.
 
 
 ```csharp
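The diff truncates the `logprobs` example. A sketch of the full pattern, assuming `AdditionalProperties` on `ChatCompletionsOptions` carries untyped extras as `BinaryData`:

```csharp
requestOptions = new ChatCompletionsOptions()
{
    Messages = { new ChatRequestUserMessage("How many languages are in the world?") },
    AdditionalProperties = { { "logprobs", BinaryData.FromObjectAsJson(true) } },
};

// ExtraParameters.PassThrough adds the `extra-parameters: pass-through` header,
// which tells the endpoint to forward `logprobs` to the underlying model unchanged.
response = client.Complete(requestOptions, extraParams: ExtraParameters.PassThrough);
Console.WriteLine($"Response: {response.Value.Content}");
```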
@@ -230,11 +223,11 @@ response = client.Complete(requestOptions, extraParams: ExtraParameters.PassThro
 Console.WriteLine($"Response: {response.Value.Content}");
 ```
 
-Before you pass extra parameters to the Foundry Models API, make sure your model supports those extra parameters. When the request is made to the underlying model, the header `extra-parameters` is passed to the model with the value `pass-through`. This value tells the endpoint to pass the extra parameters to the model. Use of extra parameters with the model doesn't guarantee that the model can actually handle them. Read the model's documentation to understand which extra parameters are supported.
+Before you pass extra parameters to the Azure AI model inference API, make sure your model supports those extra parameters. When the request is made to the underlying model, the header `extra-parameters` is passed to the model with the value `pass-through`. This value tells the endpoint to pass the extra parameters to the model. Use of extra parameters with the model doesn't guarantee that the model can actually handle them. Read the model's documentation to understand which extra parameters are supported.
 
 ### Use tools
 
-Some models support the use of tools, which can be an extraordinary resource when you need to offload specific tasks from the language model and instead rely on a more deterministic system or even a different language model. The Foundry Models API allows you to define tools in the following way.
+Some models support the use of tools, which can be an extraordinary resource when you need to offload specific tasks from the language model and instead rely on a more deterministic system or even a different language model. The Azure AI Model Inference API allows you to define tools in the following way.
 
 The following code example creates a tool definition that is able to look up flight information for two different cities.
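The tool definition itself falls outside the diff context. A hedged sketch of what the flight-lookup definition could look like in this SDK; the function name and JSON schema are hypothetical:

```csharp
// Hypothetical tool matching the flight-lookup example described above.
FunctionDefinition flightInfoFunction = new FunctionDefinition("getFlightInfo")
{
    Description = "Returns information about the next flight between two cities.",
    Parameters = BinaryData.FromObjectAsJson(new
    {
        type = "object",
        properties = new
        {
            origin_city = new { type = "string", description = "City the flight departs from" },
            destination_city = new { type = "string", description = "City the flight arrives at" },
        },
        required = new[] { "origin_city", "destination_city" },
    }),
};

requestOptions.Tools.Add(new ChatCompletionsToolDefinition(flightInfoFunction));
```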

@@ -361,11 +354,11 @@ View the response from the model:
 response = client.Complete(requestOptions);
 ```
 
-### Apply Guardrails and controls
+### Apply content safety
 
-The Foundry Models API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
-The following example shows how to handle events when the model detects harmful content in the input prompt.
+The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
 
 ```csharp
@@ -399,4 +392,4 @@ catch (RequestFailedException ex)
 ```
 
 > [!TIP]
-> To learn more about how you can configure and control Azure AI Content Safety settings, check the [Azure AI Content Safety documentation](https://aka.ms/azureaicontentsafety).
+> To learn more about how you can configure and control Azure AI content safety settings, check the [Azure AI content safety documentation](https://aka.ms/azureaicontentsafety).
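The diff shows only the tail of the error-handling example. A sketch of the surrounding pattern, assuming content filtering surfaces as a `RequestFailedException` whose `ErrorCode` is `content_filter` (both the code value and the message format are assumptions):

```csharp
try
{
    Response<ChatCompletions> response = client.Complete(requestOptions);
    Console.WriteLine($"Response: {response.Value.Content}");
}
catch (RequestFailedException ex) when (ex.ErrorCode == "content_filter")
{
    // The prompt or the generated completion was flagged as potentially harmful
    // by the content filtering system attached to the deployment.
    Console.WriteLine($"Content safety filter triggered: {ex.Message}");
}
```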

articles/ai-foundry/model-inference/includes/use-chat-completions/java.md

Lines changed: 11 additions & 11 deletions
@@ -37,7 +37,7 @@ First, create the client to consume the model. The following code uses an endpoi
 ```java
 ChatCompletionsClient client = new ChatCompletionsClientBuilder()
     .credential(new AzureKeyCredential("{key}"))
-    .endpoint("https://<resource>.services.ai.azure.com/models")
+    .endpoint("https://<resource>.services.ai.azure.com/api/models")
     .buildClient();
 ```

@@ -47,7 +47,7 @@ If you've configured the resource with **Microsoft Entra ID** support, you can u
 TokenCredential defaultCredential = new DefaultAzureCredentialBuilder().build();
 ChatCompletionsClient client = new ChatCompletionsClientBuilder()
     .credential(defaultCredential)
-    .endpoint("https://<resource>.services.ai.azure.com/models")
+    .endpoint("https://<resource>.services.ai.azure.com/api/models")
     .buildClient();
 ```

@@ -65,7 +65,7 @@ ChatCompletions response = client.complete(new ChatCompletionsOptions(chatMessag
 ```
 
 > [!NOTE]
-> Some models don't support system messages (`role="system"`). When you use the Foundry Models API, system messages are translated to user messages, which is the closest capability available. This translation is offered for convenience, but it's important for you to verify that the model is following the instructions in the system message with the right level of confidence.
+> Some models don't support system messages (`role="system"`). When you use the Azure AI model inference API, system messages are translated to user messages, which is the closest capability available. This translation is offered for convenience, but it's important for you to verify that the model is following the instructions in the system message with the right level of confidence.
 
 The response is as follows, where you can see the model's usage statistics:

@@ -119,7 +119,7 @@ client.completeStream(new ChatCompletionsOptions(chatMessages))
 
 #### Explore more parameters supported by the inference client
 
-Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Foundry Models API reference](https://aka.ms/azureai/modelinference).
+Explore other parameters that you can specify in the inference client. For a full list of all the supported parameters and their corresponding documentation, see [Azure AI Model Inference API reference](https://aka.ms/azureai/modelinference).
 Some models don't support JSON output formatting. You can always prompt the model to generate JSON outputs. However, such outputs aren't guaranteed to be valid JSON.
 
 If you want to pass a parameter that isn't in the list of supported parameters, you can pass it to the underlying model using *extra parameters*. See [Pass extra parameters to the model](#pass-extra-parameters-to-the-model).
@@ -130,13 +130,13 @@ Some models can create JSON outputs. Set `response_format` to `json_object` to e
 
 ### Pass extra parameters to the model
 
-The Foundry Models API allows you to pass extra parameters to the model. The following code example shows how to pass the extra parameter `logprobs` to the model.
+The Azure AI Model Inference API allows you to pass extra parameters to the model. The following code example shows how to pass the extra parameter `logprobs` to the model.
 
-Before you pass extra parameters to the Foundry Models API, make sure your model supports those extra parameters. When the request is made to the underlying model, the header `extra-parameters` is passed to the model with the value `pass-through`. This value tells the endpoint to pass the extra parameters to the model. Use of extra parameters with the model doesn't guarantee that the model can actually handle them. Read the model's documentation to understand which extra parameters are supported.
+Before you pass extra parameters to the Azure AI model inference API, make sure your model supports those extra parameters. When the request is made to the underlying model, the header `extra-parameters` is passed to the model with the value `pass-through`. This value tells the endpoint to pass the extra parameters to the model. Use of extra parameters with the model doesn't guarantee that the model can actually handle them. Read the model's documentation to understand which extra parameters are supported.
 
 ### Use tools
 
-Some models support the use of tools, which can be an extraordinary resource when you need to offload specific tasks from the language model and instead rely on a more deterministic system or even a different language model. The Foundry Models API allows you to define tools in the following way.
+Some models support the use of tools, which can be an extraordinary resource when you need to offload specific tasks from the language model and instead rely on a more deterministic system or even a different language model. The Azure AI Model Inference API allows you to define tools in the following way.
 
 The following code example creates a tool definition that is able to look up flight information for two different cities.

@@ -155,11 +155,11 @@ Now, it's time to call the appropriate function to handle the tool call. The fol
 
 View the response from the model:
 
-### Apply Guardrails and controls
+### Apply content safety
 
-The Foundry Models API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI content safety](https://aka.ms/azureaicontentsafety). When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
 
-The following example shows how to handle events when the model detects harmful content in the input prompt.
+The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
 
 > [!TIP]
-> To learn more about how you can configure and control Azure AI Content Safety settings, check the [Azure AI Content Safety documentation](https://aka.ms/azureaicontentsafety).
+> To learn more about how you can configure and control Azure AI content safety settings, check the [Azure AI content safety documentation](https://aka.ms/azureaicontentsafety).
