articles/ai-services/openai/assistants-reference-messages.md (2 additions, 2 deletions)
@@ -36,7 +36,7 @@ Create a message.
 |Name | Type | Required | Description |
 |--- |--- |--- |--- |
-|`role`| string | Required | The role of the entity that is creating the message. Can be `user` or `assistant`. `assistant` indicates the message is sent by an actual user and should be used in most cases to represent user-generated messages. `assistant` indicates the message is generated by the assistant. Use this value to insert messages from the assistant into the conversation. |
+|`role`| string | Required | The role of the entity that is creating the message. Can be `user` or `assistant`. `user` indicates the message is sent by an actual user and should be used in most cases to represent user-generated messages. `assistant` indicates the message is generated by the assistant. Use this value to insert messages from the assistant into the conversation. |
 |`content`| string | Required | The content of the message. |
 |`file_ids`| array | Optional | A list of File IDs that the message should use. There can be a maximum of 10 files attached to a message. Useful for tools like retrieval and code_interpreter that can access and use files. |
 |`metadata`| map | Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. |
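As an aside for readers of this reference table, the constraints it states can be sketched as client-side validation. This is a hypothetical helper (the function name and request-body shape are invented for illustration; the official SDK performs its own validation):

```python
def build_message_request(role, content, file_ids=None, metadata=None):
    """Build a 'create message' body, enforcing the limits from the table."""
    if role not in ("user", "assistant"):
        raise ValueError("role must be 'user' or 'assistant'")
    file_ids = file_ids or []
    if len(file_ids) > 10:
        raise ValueError("a message can have at most 10 attached files")
    metadata = metadata or {}
    if len(metadata) > 16:
        raise ValueError("metadata supports at most 16 key-value pairs")
    for key, value in metadata.items():
        # Keys are capped at 64 characters, values at 512.
        if len(key) > 64 or len(str(value)) > 512:
            raise ValueError("metadata keys <= 64 chars, values <= 512 chars")
    return {"role": role, "content": content,
            "file_ids": file_ids, "metadata": metadata}
```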
@@ -371,7 +371,7 @@ Represents a message within a thread.
 |`object`| string |The object type, which is always thread.message.|
 |`created_at`| integer |The Unix timestamp (in seconds) for when the message was created.|
 |`thread_id`| string |The thread ID that this message belongs to.|
-|`role`| string |The entity that produced the message. One of user or assistant.|
+|`role`| string |The entity that produced the message. One of `user` or `assistant`.|
 |`content`| array |The content of the message in array of text and/or images.|
 |`assistant_id`| string or null |If applicable, the ID of the assistant that authored this message.|
 |`run_id`| string or null |If applicable, the ID of the run associated with the authoring of this message.|
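A short sketch of consuming the object described above: the content field is an array mixing text and image items, so pulling out the text takes a small loop. The content-item layout used here (`{"type": "text", "text": {"value": ...}}`) is an assumption not stated in the table:

```python
def message_text(message):
    """Collect the text parts of a thread.message's content array."""
    assert message["object"] == "thread.message"
    parts = []
    for item in message["content"]:
        # Skip non-text items such as image content (assumed shape).
        if item.get("type") == "text":
            parts.append(item["text"]["value"])
    return " ".join(parts)
```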
In this quickstart, we walk you through setting up your local development environment with the prompt flow SDK. We write a prompt, run it as part of your app code, trace the LLM calls being made, and run a basic evaluation on the outputs of the LLM.
@@ -139,7 +138,7 @@ Activating the Python environment means that when you run ```python``` or ```pip
 ## Install the prompt flow SDK

-In this section, we use prompt flow to build our application. [Prompt flow](https://microsoft.github.io/promptflow/) is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring.
+In this section, we use prompt flow to build our application. [Prompt flow](https://microsoft.github.io/promptflow) is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring.
Use pip to install the prompt flow SDK into the virtual environment that you created.
```
pip install promptflow
```
@@ -161,24 +160,24 @@ Your AI services endpoint and deployment name are required to call the Azure Ope
 1. Create a ```.env``` file, and paste the following code:
 ```
 AZURE_OPENAI_ENDPOINT=endpoint_value
-AZURE_OPENAI_DEPLOYMENT_NAME=deployment_name
-AZURE_OPENAI_API_VERSION=2024-02-15-preview
+AZURE_OPENAI_CHAT_DEPLOYMENT=chat_deployment_name
+AZURE_OPENAI_API_VERSION=api_version
 ```
1. Navigate to the [chat playground inside of your AI Studio project](./get-started-playground.md#chat-in-the-playground-without-your-data). First validate that chat is working with your model by sending a message to the LLM.
1. Find the Azure OpenAI deployment name in the chat playground. Select the deployment in the dropdown and hover over the deployment name to view it. In this example, the deployment name is **gpt-35-turbo-16k**.
:::image type="content" source="../media/quickstarts/promptflow-sdk/playground-deployment-view-code.png" alt-text="Screenshot of the AI Studio chat playground opened, highlighting the deployment name and the view code button." lightbox="../media/quickstarts/promptflow-sdk/playground-deployment-view-code.png":::
-1. In the ```.env``` file, replace ```deployment_name``` with the name of the deployment from the previous step. In this example, we're using the deployment name ```gpt-35-turbo-16k```.
-1. Select the **<\> View Code** button and copy the endpoint value.
+1. In the ```.env``` file, replace ```chat_deployment_name``` with the name of the deployment from the previous step. In this example, we're using the deployment name ```gpt-35-turbo-16k```.
+1. Select the **<\> View Code** button and copy the endpoint value and API version value.

 :::image type="content" source="../media/quickstarts/promptflow-sdk/playground-copy-endpoint.png" alt-text="Screenshot of the view code popup highlighting the button to copy the endpoint value." lightbox="../media/quickstarts/promptflow-sdk/playground-copy-endpoint.png":::

-1. In the ```.env``` file, replace ```endpoint_value``` with the endpoint value copied from the dialog in the previous step.
+1. In the ```.env``` file, replace ```endpoint_value``` with the endpoint value and replace ```api_version``` with the API version copied from the dialog in the previous step (such as "2024-02-15-preview").
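Once the placeholders are filled in, the app reads the file with `load_dotenv()`. As a rough picture of what that does with this file, here is a minimal stdlib sketch (the real python-dotenv package also handles quoting, comments, and interpolation, and by default does not override variables already set):

```python
import os

def load_env_file(path=".env"):
    """Minimal sketch of load_dotenv(): read KEY=value lines into os.environ."""
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ[key.strip()] = value.strip()
```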
 > [!WARNING]
-> Key based authentication is supported but isn't recommended by Microsoft. If you want to use keys you can add your key to the ```.env```, but please ensure that your ```.env``` is in your ```.gitignore``` file so that you don't accidentally checked into your git repository.
+> Key based authentication is supported but isn't recommended by Microsoft. If you want to use keys you can add your key to the ```.env```, but please ensure that your ```.env``` is in your ```.gitignore``` file so that you don't accidentally check it into your git repository.

 ## Create a basic chat prompt and app
@@ -231,7 +230,7 @@ load_dotenv()
from promptflow.core import Prompty, AzureOpenAIModelConfiguration
|**`*.hcp.<location>.azmk8s.io`**|**`HTTPS:443`**| Required for Node <-> API server communication. Replace *\<location\>* with the region where your AKS cluster is deployed. This is required for clusters with *konnectivity-agent* enabled. Konnectivity also uses Application-Layer Protocol Negotiation (ALPN) to communicate between agent and server. Blocking or rewriting the ALPN extension will cause a failure. This isn't required for [private clusters][private-clusters]. |
|**`mcr.microsoft.com`**|**`HTTPS:443`**| Required to access images in Microsoft Container Registry (MCR). This registry contains first-party images/charts (for example, coreDNS, etc.). These images are required for the correct creation and functioning of the cluster, including scale and upgrade operations. |
-|**`*.data.mcr.microsoft.com`**|**`HTTPS:443`**| Required for MCR storage backed by the Azure content delivery network (CDN). |
+|**`*.data.mcr.microsoft.com`**, **`mcr-0001.mcr-msedge.net`**|**`HTTPS:443`**| Required for MCR storage backed by the Azure content delivery network (CDN). |
|**`management.azure.com`**|**`HTTPS:443`**| Required for Kubernetes operations against the Azure API. |
|**`login.microsoftonline.com`**|**`HTTPS:443`**| Required for Microsoft Entra authentication. |
|**`packages.microsoft.com`**|**`HTTPS:443`**| This address is the Microsoft packages repository used for cached *apt-get* operations. Example packages include Moby, PowerShell, and Azure CLI. |
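The wildcard entries in this egress table can be sanity-checked with simple pattern matching. The sketch below is an illustration of the allowlist only, not Azure Firewall's matcher ("eastus" stands in for your cluster's region, and `fnmatch`'s `*` also spans dots, which is looser than firewall wildcard semantics):

```python
from fnmatch import fnmatch

# Required egress FQDNs from the table above, with <location> filled in
# as "eastus" purely for illustration.
REQUIRED_FQDNS = [
    "*.hcp.eastus.azmk8s.io",
    "mcr.microsoft.com",
    "*.data.mcr.microsoft.com",
    "mcr-0001.mcr-msedge.net",
    "management.azure.com",
    "login.microsoftonline.com",
    "packages.microsoft.com",
]

def is_allowed(hostname):
    """Return True if the hostname matches any required egress FQDN."""
    return any(fnmatch(hostname, pattern) for pattern in REQUIRED_FQDNS)
```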
The parameter value is validated based on the discriminated property value. In the preceding example, if the *serviceConfig* parameter value is of type *foo*, it undergoes validation using the *FooConfig* type. Likewise, if the parameter value is of type *bar*, validation is performed using the *BarConfig* type, and this pattern continues for other types as well.
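The dispatch described above can be sketched in Python as an analogy (the schema names and properties mirror the *FooConfig*/*BarConfig* example and are invented; this illustrates discriminated-union validation in general, not Bicep's implementation):

```python
# Each schema lists the properties allowed when its discriminator is chosen.
FOO_CONFIG = {"type", "fooProperty"}
BAR_CONFIG = {"type", "barProperty"}
SCHEMAS = {"foo": FOO_CONFIG, "bar": BAR_CONFIG}

def validate_service_config(config):
    """Pick a schema by the 'type' discriminator, then validate against it."""
    schema = SCHEMAS.get(config.get("type"))
    if schema is None:
        raise ValueError("unknown discriminator value")
    unknown = set(config) - schema
    if unknown:
        raise ValueError(f"unexpected properties: {sorted(unknown)}")
    return True
```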
-## Import types between Bicep files (Preview)
+## Import types between Bicep files
[Bicep CLI version 0.21.X or higher](./install.md) is required to use this compile-time import feature. The experimental flag `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features).
articles/firewall/integrate-lb.md (4 additions, 4 deletions)
@@ -5,15 +5,15 @@ services: firewall
 author: vhorne
 ms.service: firewall
 ms.topic: how-to
-ms.date: 10/27/2022
+ms.date: 06/14/2024
 ms.author: victorh
 ---

 # Integrate Azure Firewall with Azure Standard Load Balancer

 You can integrate an Azure Firewall into a virtual network with an Azure Standard Load Balancer (either public or internal).

-The preferred design is to integrate an internal load balancer with your Azure firewall, as this is a much simpler design. You can use a public load balancer if you already have one deployed and you want to keep it in place. However, you need to be aware of an asymmetric routing issue that can break functionality with the public load balancer scenario.
+The preferred design is to integrate an internal load balancer with your Azure firewall, as this is a simpler design. You can use a public load balancer if you already have one deployed and you want to keep it in place. However, you need to be aware of an asymmetric routing issue that can break functionality with the public load balancer scenario.
For more information about Azure Load Balancer, see [What is Azure Load Balancer?](../load-balancer/load-balancer-overview.md)
@@ -30,7 +30,7 @@ Asymmetric routing is where a packet takes one path to the destination and takes
When you deploy an Azure Firewall into a subnet, one step is to create a default route for the subnet directing packets through the firewall's private IP address located on the AzureFirewallSubnet. For more information, see [Tutorial: Deploy and configure Azure Firewall using the Azure portal](tutorial-firewall-deploy-portal.md#create-a-default-route).
When you introduce the firewall into your load balancer scenario, you want your Internet traffic to come in through your firewall's public IP address. From there, the firewall applies its firewall rules and NATs the packets to your load balancer's public IP address. This is where the problem occurs. Packets arrive on the firewall's public IP address, but return to the firewall via the private IP address (using the default route).
-To avoid this problem, create an additional host route for the firewall's public IP address. Packets going to the firewall's public IP address are routed via the Internet. This avoids taking the default route to the firewall's private IP address.
+To avoid this problem, create another host route for the firewall's public IP address. Packets going to the firewall's public IP address are routed via the Internet. This avoids taking the default route to the firewall's private IP address.
:::image type="content" source="media/integrate-lb/Firewall-LB-asymmetric.png" alt-text="Diagram of asymmetric routing." lightbox="media/integrate-lb/Firewall-LB-asymmetric.png":::
### Route table example
@@ -58,7 +58,7 @@ So, you can deploy this scenario similar to the public load balancer scenario, b
The virtual machines in the backend pool can have outbound Internet connectivity through the Azure Firewall. Configure a user defined route on the virtual machine's subnet with the firewall as the next hop.
-## Additional security
+## Extra security
To further enhance the security of your load-balanced scenario, you can use network security groups (NSGs).
articles/machine-learning/how-to-managed-network.md (1 addition, 0 deletions)
@@ -1111,6 +1111,7 @@ The Azure Machine Learning managed VNet feature is free. However, you're charged
* Using FQDN outbound rules increases the cost of the managed VNet because FQDN rules use Azure Firewall. For more information, see [Pricing](#pricing).
* FQDN outbound rules only support ports 80 and 443.
* If your compute instance is in a managed network and is configured for no public IP, use the `az ml compute connect-ssh` command to connect to it using SSH.
+* When using a managed VNet, you can't deploy compute resources inside your custom VNet. Compute resources can only be created inside the managed VNet.