Commit 7ef935b

Merge branch 'main' of https://github.com/microsoftdocs/azure-docs-pr into dhsm-emails

2 parents: a6a3702 + 464f64d
File tree: 7 files changed (+22 −22 lines)

articles/ai-services/openai/assistants-reference-messages.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -36,7 +36,7 @@ Create a message.
 
 |Name | Type | Required | Description |
 |--- |--- |--- |--- |
-| `role` | string | Required | The role of the entity that is creating the message. Can be `user` or `assistant`. `assistant` indicates the message is sent by an actual user and should be used in most cases to represent user-generated messages. `assistant` indicates the message is generated by the assistant. Use this value to insert messages from the assistant into the conversation. |
+| `role` | string | Required | The role of the entity that is creating the message. Can be `user` or `assistant`. `user` indicates the message is sent by an actual user and should be used in most cases to represent user-generated messages. `assistant` indicates the message is generated by the assistant. Use this value to insert messages from the assistant into the conversation. |
 | `content` | string | Required | The content of the message. |
 | `file_ids` | array | Optional | A list of File IDs that the message should use. There can be a maximum of 10 files attached to a message. Useful for tools like retrieval and code_interpreter that can access and use files. |
 | `metadata` | map | Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. |
@@ -371,7 +371,7 @@ Represents a message within a thread.
 | `object` | string |The object type, which is always thread.message.|
 | `created_at` | integer |The Unix timestamp (in seconds) for when the message was created.|
 | `thread_id` | string |The thread ID that this message belongs to.|
-| `role` | string |The entity that produced the message. One of user or assistant.|
+| `role` | string |The entity that produced the message. One of `user` or `assistant`.|
 | `content` | array |The content of the message in array of text and/or images.|
 | `assistant_id` | string or null |If applicable, the ID of the assistant that authored this message.|
 | `run_id` | string or null |If applicable, the ID of the run associated with the authoring of this message.|
```
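The limits in the parameter table (two roles, at most 10 attached files, 16 metadata pairs with 64/512-character keys and values) can be checked client-side before sending a request. A minimal sketch; the `build_message_payload` helper is illustrative and not part of any Azure SDK:

```python
def build_message_payload(role, content, file_ids=None, metadata=None):
    """Validate and assemble a thread-message body per the documented limits."""
    if role not in ("user", "assistant"):
        raise ValueError("role must be 'user' or 'assistant'")
    payload = {"role": role, "content": content}
    if file_ids is not None:
        if len(file_ids) > 10:  # at most 10 files per message
            raise ValueError("a message can have at most 10 attached files")
        payload["file_ids"] = list(file_ids)
    if metadata is not None:
        if len(metadata) > 16:  # at most 16 key-value pairs
            raise ValueError("metadata supports at most 16 key-value pairs")
        for key, value in metadata.items():
            if len(key) > 64 or len(str(value)) > 512:
                raise ValueError("metadata keys <= 64 chars, values <= 512 chars")
        payload["metadata"] = dict(metadata)
    return payload

# Example: a user message with one attached file (IDs are made up).
payload = build_message_payload("user", "Summarize the file.", file_ids=["file-abc123"])
```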

articles/ai-studio/quickstarts/get-started-code.md

Lines changed: 10 additions & 11 deletions
````diff
@@ -13,7 +13,6 @@ author: eric-urban
 ---
 
 # Build a custom chat app in Python using the prompt flow SDK
-
 [!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)]
 
 In this quickstart, we walk you through setting up your local development environment with the prompt flow SDK. We write a prompt, run it as part of your app code, trace the LLM calls being made, and run a basic evaluation on the outputs of the LLM.
@@ -139,7 +138,7 @@ Activating the Python environment means that when you run ```python``` or ```pip
 
 ## Install the prompt flow SDK
 
-In this section, we use prompt flow to build our application. [Prompt flow](https://microsoft.github.io/promptflow/) is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring.
+In this section, we use prompt flow to build our application. [Prompt flow](https://microsoft.github.io/promptflow) is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring.
 
 Use pip to install the prompt flow SDK into the virtual environment that you created.
 ```
@@ -161,24 +160,24 @@ Your AI services endpoint and deployment name are required to call the Azure Ope
 1. Create a ```.env``` file, and paste the following code:
 ```
 AZURE_OPENAI_ENDPOINT=endpoint_value
-AZURE_OPENAI_DEPLOYMENT_NAME=deployment_name
-AZURE_OPENAI_API_VERSION=2024-02-15-preview
+AZURE_OPENAI_CHAT_DEPLOYMENT=chat_deployment_name
+AZURE_OPENAI_API_VERSION=api_version
 ```
 
 1. Navigate to the [chat playground inside of your AI Studio project](./get-started-playground.md#chat-in-the-playground-without-your-data). First validate that chat is working with your model by sending a message to the LLM.
 1. Find the Azure OpenAI deployment name in the chat playground. Select the deployment in the dropdown and hover over the deployment name to view it. In this example, the deployment name is **gpt-35-turbo-16k**.
 
 :::image type="content" source="../media/quickstarts/promptflow-sdk/playground-deployment-view-code.png" alt-text="Screenshot of the AI Studio chat playground opened, highlighting the deployment name and the view code button." lightbox="../media/quickstarts/promptflow-sdk/playground-deployment-view-code.png":::
 
-1. In the ```.env``` file, replace ```deployment_name``` with the name of the deployment from the previous step. In this example, we're using the deployment name ```gpt-35-turbo-16k```.
-1. Select the **<\> View Code** button and copy the endpoint value.
+1. In the ```.env``` file, replace ```chat_deployment_name``` with the name of the deployment from the previous step. In this example, we're using the deployment name ```gpt-35-turbo-16k```.
+1. Select the **<\> View Code** button and copy the endpoint value and API version value.
 
 :::image type="content" source="../media/quickstarts/promptflow-sdk/playground-copy-endpoint.png" alt-text="Screenshot of the view code popup highlighting the button to copy the endpoint value." lightbox="../media/quickstarts/promptflow-sdk/playground-copy-endpoint.png":::
 
-1. In the ```.env``` file, replace ```endpoint_value``` with the endpoint value copied from the dialog in the previous step.
+1. In the ```.env``` file, replace ```endpoint_value``` with the endpoint value and replace ```api_version``` with the API version copied from the dialog in the previous step (such as "2024-02-15-preview").
 
 > [!WARNING]
-> Key based authentication is supported but isn't recommended by Microsoft. If you want to use keys you can add your key to the ```.env```, but please ensure that your ```.env``` is in your ```.gitignore``` file so that you don't accidentally checked into your git repository.
+> Key based authentication is supported but isn't recommended by Microsoft. If you want to use keys you can add your key to the ```.env```, but please ensure that your ```.env``` is in your ```.gitignore``` file so that you don't accidentally check it into your git repository.
 
 ## Create a basic chat prompt and app
 
@@ -231,7 +230,7 @@ load_dotenv()
 from promptflow.core import Prompty, AzureOpenAIModelConfiguration
 
 model_config = AzureOpenAIModelConfiguration(
-    azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME"),
+    azure_deployment=os.getenv("AZURE_OPENAI_CHAT_DEPLOYMENT"),
     api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
     azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
 )
@@ -307,7 +306,7 @@ from promptflow.core import Prompty, AzureOpenAIModelConfiguration
 from promptflow.evals.evaluators import ChatEvaluator
 
 model_config = AzureOpenAIModelConfiguration(
-    azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME"),
+    azure_deployment=os.getenv("AZURE_OPENAI_CHAT_DEPLOYMENT"),
     api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
     azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
 )
@@ -352,4 +351,4 @@ For more information on how to use prompt flow evaluators, including how to make
 
 - [Quickstart: Create a project and use the chat playground in Azure AI Studio](./get-started-playground.md)
 - [Work with projects in VS Code](../how-to/develop/vscode.md)
-- [Overview of the Azure AI SDKs](../how-to/develop/sdk-overview.md)
+- [Overview of the Azure AI SDKs](../how-to/develop/sdk-overview.md)
```` 
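This commit renames the `.env` key `AZURE_OPENAI_DEPLOYMENT_NAME` to `AZURE_OPENAI_CHAT_DEPLOYMENT`, so all three settings must be present before `AzureOpenAIModelConfiguration` is constructed. A minimal fail-fast sketch; the `read_openai_settings` helper is illustrative and not part of the prompt flow SDK:

```python
import os

# The three settings from the quickstart's .env file.
REQUIRED_VARS = (
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_CHAT_DEPLOYMENT",
    "AZURE_OPENAI_API_VERSION",
)

def read_openai_settings(env=os.environ):
    """Return the required settings, raising a clear error if any are missing."""
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError(f"missing settings in .env: {', '.join(missing)}")
    return {name: env[name] for name in REQUIRED_VARS}
```

The returned values would feed the `azure_endpoint`, `azure_deployment`, and `api_version` arguments shown in the diff, surfacing a typo in the `.env` file before the first LLM call rather than at request time.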

articles/aks/outbound-rules-control-egress.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -60,7 +60,7 @@ The following network and FQDN/application rules are required for an AKS cluster
 |----------------------------------|-----------------|----------|
 | **`*.hcp.<location>.azmk8s.io`** | **`HTTPS:443`** | Required for Node <-> API server communication. Replace *\<location\>* with the region where your AKS cluster is deployed. This is required for clusters with *konnectivity-agent* enabled. Konnectivity also uses Application-Layer Protocol Negotiation (ALPN) to communicate between agent and server. Blocking or rewriting the ALPN extension will cause a failure. This isn't required for [private clusters][private-clusters]. |
 | **`mcr.microsoft.com`** | **`HTTPS:443`** | Required to access images in Microsoft Container Registry (MCR). This registry contains first-party images/charts (for example, coreDNS, etc.). These images are required for the correct creation and functioning of the cluster, including scale and upgrade operations. |
-| **`*.data.mcr.microsoft.com`** | **`HTTPS:443`** | Required for MCR storage backed by the Azure content delivery network (CDN). |
+| **`*.data.mcr.microsoft.com`**, **`mcr-0001.mcr-msedge.net`** | **`HTTPS:443`** | Required for MCR storage backed by the Azure content delivery network (CDN). |
 | **`management.azure.com`** | **`HTTPS:443`** | Required for Kubernetes operations against the Azure API. |
 | **`login.microsoftonline.com`** | **`HTTPS:443`** | Required for Microsoft Entra authentication. |
 | **`packages.microsoft.com`** | **`HTTPS:443`** | This address is the Microsoft packages repository used for cached *apt-get* operations. Example packages include Moby, PowerShell, and Azure CLI. |
```
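The wildcard FQDN rules in the table behave like glob patterns matched against the egress hostname. A sketch of that matching with Python's `fnmatch`, using an example region substituted for `<location>`; the rule list mirrors the table but is illustrative, not how Azure Firewall is actually configured:

```python
from fnmatch import fnmatch

# FQDN rules from the table above, with "eastus" standing in for <location>.
RULES = [
    "*.hcp.eastus.azmk8s.io",
    "mcr.microsoft.com",
    "*.data.mcr.microsoft.com",
    "mcr-0001.mcr-msedge.net",   # added by this commit for the MCR CDN
    "management.azure.com",
    "login.microsoftonline.com",
    "packages.microsoft.com",
]

def egress_allowed(hostname):
    """True if the hostname matches any allowed FQDN rule (glob semantics)."""
    return any(fnmatch(hostname, rule) for rule in RULES)
```

Without the `mcr-0001.mcr-msedge.net` entry added in this commit, image pulls served from that CDN hostname would be blocked even though `*.data.mcr.microsoft.com` is allowed.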

articles/azure-resource-manager/bicep/user-defined-data-types.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -3,7 +3,7 @@ title: User-defined types in Bicep
 description: Describes how to define and use user-defined data types in Bicep.
 ms.topic: conceptual
 ms.custom: devx-track-bicep
-ms.date: 05/22/2024
+ms.date: 06/14/2024
 ---
 
 # User-defined data types in Bicep
@@ -258,7 +258,7 @@ output config object = serviceConfig
 
 The parameter value is validated based on the discriminated property value. In the preceding example, if the *serviceConfig* parameter value is of type *foo*, it undergoes validation using the *FooConfig* type. Likewise, if the parameter value is of type *bar*, validation is performed using the *BarConfig* type, and this pattern continues for other types as well.
 
-## Import types between Bicep files (Preview)
+## Import types between Bicep files
 
 [Bicep CLI version 0.21.X or higher](./install.md) is required to use this compile-time import feature. The experimental flag `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features).
```

articles/azure-resource-manager/management/tag-support.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -418,7 +418,7 @@ To get the same data as a file of comma-separated values, download [tag-support.
 > | billingAccounts / appliedReservationOrders | No | No |
 > | billingAccounts / associatedTenants | No | No |
 > | billingAccounts / billingPermissions | No | No |
-> | billingAccounts / billingProfiles | No | No |
+> | billingAccounts / billingProfiles | Yes | Yes |
 > | billingAccounts / billingProfiles / billingPermissions | No | No |
 > | billingAccounts / billingProfiles / billingRoleAssignments | No | No |
 > | billingAccounts / billingProfiles / billingRoleDefinitions | No | No |
@@ -429,7 +429,7 @@ To get the same data as a file of comma-separated values, download [tag-support.
 > | billingAccounts / billingProfiles / invoices | No | No |
 > | billingAccounts / billingProfiles / invoices / pricesheet | No | No |
 > | billingAccounts / billingProfiles / invoices / transactions | No | No |
-> | billingAccounts / billingProfiles / invoiceSections | No | No |
+> | billingAccounts / billingProfiles / invoiceSections | Yes | Yes |
 > | billingAccounts / billingProfiles / invoiceSections / billingPermissions | No | No |
 > | billingAccounts / billingProfiles / invoiceSections / billingRoleAssignments | No | No |
 > | billingAccounts / billingProfiles / invoiceSections / billingRoleDefinitions | No | No |
```

articles/firewall/integrate-lb.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -5,15 +5,15 @@ services: firewall
 author: vhorne
 ms.service: firewall
 ms.topic: how-to
-ms.date: 10/27/2022
+ms.date: 06/14/2024
 ms.author: victorh
 ---
 
 # Integrate Azure Firewall with Azure Standard Load Balancer
 
 You can integrate an Azure Firewall into a virtual network with an Azure Standard Load Balancer (either public or internal).
 
-The preferred design is to integrate an internal load balancer with your Azure firewall, as this is a much simpler design. You can use a public load balancer if you already have one deployed and you want to keep it in place. However, you need to be aware of an asymmetric routing issue that can break functionality with the public load balancer scenario.
+The preferred design is to integrate an internal load balancer with your Azure firewall, as this is a simpler design. You can use a public load balancer if you already have one deployed and you want to keep it in place. However, you need to be aware of an asymmetric routing issue that can break functionality with the public load balancer scenario.
 
 For more information about Azure Load Balancer, see [What is Azure Load Balancer?](../load-balancer/load-balancer-overview.md)
 
@@ -30,7 +30,7 @@ Asymmetric routing is where a packet takes one path to the destination and takes
 When you deploy an Azure Firewall into a subnet, one step is to create a default route for the subnet directing packets through the firewall's private IP address located on the AzureFirewallSubnet. For more information, see [Tutorial: Deploy and configure Azure Firewall using the Azure portal](tutorial-firewall-deploy-portal.md#create-a-default-route).
 
 When you introduce the firewall into your load balancer scenario, you want your Internet traffic to come in through your firewall's public IP address. From there, the firewall applies its firewall rules and NATs the packets to your load balancer's public IP address. This is where the problem occurs. Packets arrive on the firewall's public IP address, but return to the firewall via the private IP address (using the default route).
-To avoid this problem, create an additional host route for the firewall's public IP address. Packets going to the firewall's public IP address are routed via the Internet. This avoids taking the default route to the firewall's private IP address.
+To avoid this problem, create another host route for the firewall's public IP address. Packets going to the firewall's public IP address are routed via the Internet. This avoids taking the default route to the firewall's private IP address.
 
 :::image type="content" source="media/integrate-lb/Firewall-LB-asymmetric.png" alt-text="Diagram of asymmetric routing." lightbox="media/integrate-lb/Firewall-LB-asymmetric.png":::
 ### Route table example
@@ -58,7 +58,7 @@ So, you can deploy this scenario similar to the public load balancer scenario, b
 The virtual machines in the backend pool can have outbound Internet connectivity through the Azure Firewall. Configure a user defined route on the virtual machine's subnet with the firewall as the next hop.
 
 
-## Additional security
+## Extra security
 
 To further enhance the security of your load-balanced scenario, you can use network security groups (NSGs).
 
```
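The asymmetric-routing fix in this article's diff relies on longest-prefix-match route selection: a /32 host route for the firewall's public IP wins over the 0.0.0.0/0 default route that points at the firewall's private IP. A sketch of that selection with Python's `ipaddress` module; the addresses 10.0.1.4 and 203.0.113.10 are illustrative, not taken from the article:

```python
import ipaddress

# (prefix, next hop) pairs mirroring the route table described above.
ROUTES = [
    ("0.0.0.0/0", "VirtualAppliance 10.0.1.4"),  # default route via firewall private IP
    ("203.0.113.10/32", "Internet"),             # host route for firewall public IP
]

def next_hop(destination):
    """Pick the next hop by longest-prefix match, as route tables do."""
    dest = ipaddress.ip_address(destination)
    best = None
    for prefix, hop in ROUTES:
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1] if best else None
```

Traffic to the firewall's public IP takes the /32 route to the Internet, while everything else still flows through the firewall's private IP, which is exactly the behavior the host route is added to achieve.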
articles/machine-learning/how-to-managed-network.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -1111,6 +1111,7 @@ The Azure Machine Learning managed VNet feature is free. However, you're charged
 * Using FQDN outbound rules increases the cost of the managed VNet because FQDN rules use Azure Firewall. For more information, see [Pricing](#pricing).
 * FQDN outbound rules only support ports 80 and 443.
 * If your compute instance is in a managed network and is configured for no public IP, use the `az ml compute connect-ssh` command to connect to it using SSH.
+* When using Managed Vnet, you can't deploy compute resources inside your custom Vnet. Compute resources can only be created inside the managed Vnet.
 
 ### Migration of compute resources
```
