
Commit b05e10d

Learn Build Service GitHub App authored and committed
Merging changes synced from https://github.com/MicrosoftDocs/azure-docs-pr (branch live)
2 parents 56853c4 + ba047bf commit b05e10d

46 files changed (+512, -766 lines)

articles/ai-studio/how-to/prompt-flow-tools/llm-tool.md

Lines changed: 8 additions & 176 deletions
@@ -17,51 +17,14 @@ author: lgayhardt
 
 [!INCLUDE [Azure AI Studio preview](../../includes/preview-ai-studio.md)]
 
-The large language model (LLM) tool in prompt flow enables you to take advantage of widely used large language models like [OpenAI](https://platform.openai.com/), [Azure OpenAI Service](../../../ai-services/openai/overview.md), and models in [Azure AI Studio model catalog](../model-catalog.md) for natural language processing.
-> [!NOTE]
-> The previous version of the LLM tool is now being deprecated. Please upgrade to latest [promptflow-tools](https://pypi.org/project/promptflow-tools/) package to consume new llm tools.
-
-Prompt flow provides a few different large language model APIs:
-
-- [Completion](https://platform.openai.com/docs/api-reference/completions): OpenAI's completion models generate text based on provided prompts.
-- [Chat](https://platform.openai.com/docs/api-reference/chat): OpenAI's chat models facilitate interactive conversations with text-based inputs and responses.
+To use large language models (LLMs) for natural language processing, you use the prompt flow LLM tool.
 
 > [!NOTE]
-> Don't use non-ascii characters in resource group name of Azure OpenAI resource, prompt flow didn't support this case.
+> For embeddings to convert text into dense vector representations for various natural language processing tasks, see [Embedding tool](embedding-tool.md).
 
 ## Prerequisites
 
-Create OpenAI resources, Azure OpenAI resources, or MaaS deployment with the LLM models (for example: llama2, mistral, cohere etc.) in Azure AI Studio model catalog:
-
-- **OpenAI**:
-
-- Sign up your account on the [OpenAI website](https://openai.com/).
-- Sign in and [find your personal API key](https://platform.openai.com/account/api-keys).
-
-- **Azure OpenAI**:
-
-- [Create Azure OpenAI resources](../../../ai-services/openai/how-to/create-resource.md).
-
-- **MaaS deployment**:
-
-[Create MaaS deployment for models in Azure AI Studio model catalog](../../concepts/deployments-overview.md#deploy-models-with-model-as-a-service).
-
-You can create serverless connection to use this MaaS deployment.
-
-## Connections
-
-Set up connections to provisioned resources in prompt flow.
-
-| Type | Name | API key | API base | API type | API version |
-|-------------|----------|----------|----------|----------|-------------|
-| OpenAI | Required | Required | - | - | - |
-| Azure OpenAI| Required | Required | Required | Required | Required |
-| Serverless | Required | Required | Required | - | - |
-
-> [!TIP]
-> - To use Microsoft Entra ID auth type for Azure OpenAI connection, you need assign either the `Cognitive Services OpenAI User` or `Cognitive Services OpenAI Contributor role` to user or user assigned managed identity.
-> - Learn more about [how to specify to use user identity to submit flow run](../create-manage-runtime.md#create-an-automatic-runtime-on-a-flow-page).
-> - Learn more about [How to configure Azure OpenAI Service with managed identities](../../../ai-services/openai/how-to/managed-identity.md).
+Prepare a prompt as described in the [Prompt tool](prompt-tool.md#prerequisites) documentation. The LLM tool and Prompt tool both support [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) templates. For more information and best practices, see [Prompt engineering techniques](../../../ai-services/openai/concepts/advanced-prompt-engineering.md).
 
 ## Build with the LLM tool
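
For context on the Connections table removed above: its per-type fields (name, API key, API base, API type, API version) correspond to what a prompt flow connection definition carries. The following is a minimal, illustrative sketch of an Azure OpenAI connection in the local promptflow CLI's YAML format; it is not part of this commit, and the connection name and placeholder values are assumptions.

```yaml
# Illustrative Azure OpenAI connection for prompt flow; placeholders are assumptions.
name: my_azure_open_ai_connection                      # "Name" column
type: azure_open_ai
api_key: "<your-api-key>"                              # "API key"
api_base: "https://<your-resource>.openai.azure.com/"  # "API base"
api_type: azure                                        # "API type"
api_version: "2024-02-01"                              # "API version"
```

If the local promptflow CLI is available, a file like this could be registered with `pf connection create -f <file>.yaml`.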

@@ -72,7 +35,7 @@ Set up connections to provisioned resources in prompt flow.
 
 1. Select the connection to one of your provisioned resources. For example, select **Default_AzureOpenAI**.
 1. From the **Api** dropdown list, select **chat** or **completion**.
-1. Enter values for the LLM tool input parameters described in the [Text completion inputs table](#inputs). If you selected the **chat** API, see the [Chat inputs table](#chat-inputs). If you selected the **completion** API, see the [Text completion inputs table](#text-completion-inputs). For information about how to prepare the prompt input, see [How to write a prompt](#how-to-write-a-prompt).
+1. Enter values for the LLM tool input parameters described in the [Text completion inputs table](#inputs). If you selected the **chat** API, see the [Chat inputs table](#chat-inputs). If you selected the **completion** API, see the [Text completion inputs table](#text-completion-inputs). For information about how to prepare the prompt input, see [Prerequisites](#prerequisites).
 1. Add more tools to your flow, as needed. Or select **Run** to run the flow.
 1. The outputs are described in the [Outputs table](#outputs).
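
The steps above correspond to an LLM node in the flow definition (`flow.dag.yaml`). Here is a minimal sketch, assuming an Azure OpenAI chat deployment named `gpt-35-turbo` and a flow input named `question`; the node name, file name, and parameter values are illustrative, not taken from this commit.

```yaml
nodes:
  - name: answer_question            # illustrative node name
    type: llm                        # the LLM tool
    source:
      type: code
      path: answer_question.jinja2   # Jinja prompt prepared per the Prerequisites section
    inputs:
      deployment_name: gpt-35-turbo  # assumed Azure OpenAI deployment name
      temperature: 0.7
      max_tokens: 256
      question: ${inputs.question}   # bound to the flow input
    connection: Default_AzureOpenAI  # the connection selected from the dropdown
    api: chat                        # "chat" or "completion", matching the Api selection
```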

@@ -111,146 +74,15 @@ The following input parameters are available.
 | presence\_penalty | float | The value that controls the model's behavior regarding repeating phrases. Default is 0. | No |
 | frequency\_penalty | float | The value that controls the model's behavior regarding generating rare phrases. Default is 0. | No |
 | logit\_bias | dictionary | The logit bias for the language model. Default is empty dictionary. | No |
-| tool\_choice | object | Value that controls which tool is called by the model. Default is null. | No |
-| tools | list | A list of tools the model may generate JSON inputs for. Default is null. | No |
-| response_format | object | An object specifying the format that the model must output. Default is null. | No |
 
 ## Outputs
 
 The output varies depending on the API you selected for inputs.
 
-| Return type | Description |
-|-------------|------------------------------------------|
-| string | Text of one predicted completion or response of conversation |
-
-## How to write a prompt?
-
-Prepare a prompt as described in the [Prompt tool](prompt-tool.md#prerequisites) documentation. The LLM tool and Prompt tool both support [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) templates. For more information and best practices, see [Prompt engineering techniques](../../../ai-services/openai/concepts/advanced-prompt-engineering.md).
-
-For example, for a chat prompt we offer a method to distinguish between different roles in a chat prompt, such as "system", "user", "assistant" and "tool". The "system", "user", "assistant" roles can have "name" and "content" properties. The "tool" role, however, should have "tool_call_id" and "content" properties. For an example of a tool chat prompt, please refer to [Sample 3](#sample-3).
-
-### Sample 1
-```jinja
-# system:
-You are a helpful assistant.
-
-{% for item in chat_history %}
-# user:
-{{item.inputs.question}}
-# assistant:
-{{item.outputs.answer}}
-{% endfor %}
-
-# user:
-{{question}}
-```
-
-In LLM tool, the prompt is transformed to match the [OpenAI messages](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) structure before sending to OpenAI chat API.
-
-```
-[
-  {
-    "role": "system",
-    "content": "You are a helpful assistant."
-  },
-  {
-    "role": "user",
-    "content": "<question-of-chat-history-round-1>"
-  },
-  {
-    "role": "assistant",
-    "content": "<answer-of-chat-history-round-1>"
-  },
-  ...
-  {
-    "role": "user",
-    "content": "<question>"
-  }
-]
-```
-
-### Sample 2
-```jinja
-# system:
-{# For role naming customization, the following syntax is used #}
-## name:
-Alice
-## content:
-You are a bot can tell good jokes.
-```
-
-In LLM tool, the prompt is transformed to match the [OpenAI messages](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) structure before sending to OpenAI chat API.
-
-```
-[
-  {
-    "role": "system",
-    "name": "Alice",
-    "content": "You are a bot can tell good jokes."
-  }
-]
-```
-
-### Sample 3
-This sample illustrates how to write a tool chat prompt.
-```jinja
-# system:
-You are a helpful assistant.
-# user:
-What is the current weather like in Boston?
-# assistant:
-{# The assistant message with 'tool_calls' must be followed by messages with role 'tool'. #}
-## tool_calls:
-{{llm_output.tool_calls}}
-# tool:
-{#
-Messages with role 'tool' must be a response to a preceding message with 'tool_calls'.
-Additionally, 'tool_call_id's should match ids of assistant message 'tool_calls'.
-#}
-## tool_call_id:
-{{llm_output.tool_calls[0].id}}
-## content:
-{{tool-answer-of-last-question}}
-# user:
-{{question}}
-```
-
-In LLM tool, the prompt is transformed to match the [OpenAI messages](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) structure before sending to OpenAI chat API.
-
-```
-[
-  {
-    "role": "system",
-    "content": "You are a helpful assistant."
-  },
-  {
-    "role": "user",
-    "content": "What is the current weather like in Boston?"
-  },
-  {
-    "role": "assistant",
-    "content": null,
-    "function_call": null,
-    "tool_calls": [
-      {
-        "id": "<tool-call-id-of-last-question>",
-        "type": "function",
-        "function": "<function-to-call-of-last-question>"
-      }
-    ]
-  },
-  {
-    "role": "tool",
-    "tool_call_id": "<tool-call-id-of-last-question>",
-    "content": "<tool-answer-of-last-question>"
-  }
-  ...
-  {
-    "role": "user",
-    "content": "<question>"
-  }
-]
-```
+| API | Return type | Description |
+|------------|-------------|------------------------------------------|
+| Completion | string | The text of one predicted completion. |
+| Chat | string | The text of one response of conversation. |
 
 ## Next steps

articles/ai-studio/how-to/prompt-flow-tools/prompt-flow-tools-overview.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ The following table provides an index of tools in prompt flow.
 
 | Tool name | Description | Package name |
 |------|-----------|-------------|--------------|
-| [LLM](./llm-tool.md) | Use large language models (LLM) for tasks such as text completion or chat. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [LLM](./llm-tool.md) | Use large language models (LLM) with the Azure OpenAI Service for tasks such as text completion or chat. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
 | [Prompt](./prompt-tool.md) | Craft a prompt by using Jinja as the templating language. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
 | [Python](./python-tool.md) | Run Python code. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
 | [Azure OpenAI GPT-4 Turbo with Vision](./azure-open-ai-gpt-4v-tool.md) | Use an Azure OpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |

articles/api-management/breaking-changes/api-version-retirement-sep-2023.md

Lines changed: 2 additions & 2 deletions
@@ -47,7 +47,7 @@ We also recommend setting the **Minimum API version** in your API Management ins
 * Terraform azurerm provider: 3.0.0
 
 * **Azure SDKs** - Update the Azure API Management SDKs to the latest versions or at least the following versions:
-* .NET: 8.0.0
+* .NET: v1.1.0
 * Go: 1.0.0
 * Python: 3.0.0
 - JavaScript: 8.0.1
@@ -79,4 +79,4 @@ To set the **Minimum API version** in the portal:
 
 ## Related content
 
-See all [upcoming breaking changes and feature retirements](overview.md).
+See all [upcoming breaking changes and feature retirements](overview.md).

articles/azure-app-configuration/quickstart-bicep.md

Lines changed: 5 additions & 2 deletions
@@ -28,6 +28,9 @@ If you don't have an Azure subscription, create a [free account](https://azure.m
 
 Managing an Azure App Configuration resource with Bicep file requires an Azure Resource Manager role, such as contributor or owner. Accessing Azure App Configuration data (key-values, snapshots) requires an Azure Resource Manager role and an additional Azure App Configuration [data plane role](concept-enable-rbac.md) when the configuration store's ARM authentication mode is set to [pass-through](./quickstart-deployment-overview.md#azure-resource-manager-authentication-mode) ARM authentication mode.
 
+> [!IMPORTANT]
+> Configuring ARM authentication mode requires App Configuration control plane API version `2023-08-01-preview` or later.
+
 ## Review the Bicep file
 
 The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/app-configuration-store-kv/).
@@ -39,8 +42,8 @@ The Bicep file used in this quickstart is from [Azure Quickstart Templates](http
 
 Two Azure resources are defined in the Bicep file:
 
-- [Microsoft.AppConfiguration/configurationStores](/azure/templates/microsoft.appconfiguration/2020-07-01-preview/configurationstores): create an App Configuration store.
-- [Microsoft.AppConfiguration/configurationStores/keyValues](/azure/templates/microsoft.appconfiguration/2020-07-01-preview/configurationstores/keyvalues): create a key-value inside the App Configuration store.
+- [Microsoft.AppConfiguration/configurationStores](/azure/templates/microsoft.appconfiguration/configurationstores): create an App Configuration store.
+- [Microsoft.AppConfiguration/configurationStores/keyValues](/azure/templates/microsoft.appconfiguration/configurationstores/keyvalues): create a key-value inside the App Configuration store.
 
 With this Bicep file, we create one key with two different values, one of which has a unique label.
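
To connect the two resource links with the new `2023-08-01-preview` note, here is a minimal Bicep sketch (not the quickstart template itself) that declares both resource types and opts into pass-through ARM authentication mode. The store name, key name, and the exact `dataPlaneProxy` property shape are assumptions based on the preview API.

```bicep
param location string = resourceGroup().location
param configStoreName string = 'appconfig${uniqueString(resourceGroup().id)}'

// App Configuration store; the dataPlaneProxy block needs API version 2023-08-01-preview or later.
resource configStore 'Microsoft.AppConfiguration/configurationStores@2023-08-01-preview' = {
  name: configStoreName
  location: location
  sku: {
    name: 'standard'
  }
  properties: {
    dataPlaneProxy: {
      authenticationMode: 'Pass-through' // assumed shape for the pass-through ARM auth mode
    }
  }
}

// Key-value inside the store; key and optional label are joined by the $ delimiter.
resource keyValue 'Microsoft.AppConfiguration/configurationStores/keyValues@2023-08-01-preview' = {
  parent: configStore
  name: 'myKey'
  properties: {
    value: 'myValue'
  }
}
```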

articles/azure-app-configuration/quickstart-deployment-overview.md

Lines changed: 3 additions & 0 deletions
@@ -33,6 +33,9 @@ To learn more about Azure RBAC and Microsoft Entra ID, see [Authorize access to
 
 Azure App Configuration data, such as key-values and snapshots, can be managed in deployment. When managing App Configuration data using this method, it's recommended to set your configuration store's Azure Resource Manager authentication mode to **Pass-through**. This authentication mode ensures that data access requires a combination of data plane and Azure Resource Manager management roles and ensuring that data access can be properly attributed to the deployment caller for auditing purpose.
 
+> [!IMPORTANT]
+> App Configuration control plane API version `2023-08-01-preview` or later is required to configure **Azure Resource Manager Authentication Mode** using [ARM template](./quickstart-resource-manager.md), [Bicep](./quickstart-bicep.md), or REST API. See the [REST API examples](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/appconfiguration/resource-manager/Microsoft.AppConfiguration/preview/2023-08-01-preview/examples/ConfigurationStoresCreateWithDataPlaneProxy.json).
+
 ### Azure Resource Manager authentication mode
 
 # [Azure portal](#tab/portal)

articles/azure-app-configuration/quickstart-resource-manager.md

Lines changed: 6 additions & 3 deletions
@@ -35,21 +35,24 @@ If you don't have an Azure subscription, create a [free account](https://azure.m
 
 Managing Azure App Configuration resource inside an ARM template requires Azure Resource Manager role, such as contributor or owner. Accessing Azure App Configuration data (key-values, snapshots) requires Azure Resource Manager role and Azure App Configuration [data plane role](concept-enable-rbac.md) under [pass-through](./quickstart-deployment-overview.md#azure-resource-manager-authentication-mode) ARM authentication mode.
 
+> [!IMPORTANT]
+> Configuring ARM authentication mode requires App Configuration control plane API version `2023-08-01-preview` or later.
+
 ## Review the template
 
 The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/app-configuration-store-kv/). It creates a new App Configuration store with two key-values inside. It then uses the `reference` function to output the values of the two key-value resources. Reading the key's value in this way allows it to be used in other places in the template.
 
 The quickstart uses the `copy` element to create multiple instances of key-value resource. To learn more about the `copy` element, see [Resource iteration in ARM templates](../azure-resource-manager/templates/copy-resources.md).
 
 > [!IMPORTANT]
-> This template requires App Configuration resource provider version `2020-07-01-preview` or later. This version uses the `reference` function to read key-values. The `listKeyValue` function that was used to read key-values in the previous version is not available starting in version `2020-07-01-preview`.
+> This template requires App Configuration control plane API version `2022-05-01` or later. This version uses the `reference` function to read key-values. The `listKeyValue` function that was used to read key-values in the previous version is not available starting in version `2020-07-01-preview`.
 
 :::code language="json" source="~/quickstart-templates/quickstarts/microsoft.appconfiguration/app-configuration-store-kv/azuredeploy.json":::
 
 Two Azure resources are defined in the template:
 
-- [Microsoft.AppConfiguration/configurationStores](/azure/templates/microsoft.appconfiguration/2020-07-01-preview/configurationstores): create an App Configuration store.
-- [Microsoft.AppConfiguration/configurationStores/keyValues](/azure/templates/microsoft.appconfiguration/2020-07-01-preview/configurationstores/keyvalues): create a key-value inside the App Configuration store.
+- [Microsoft.AppConfiguration/configurationStores](/azure/templates/microsoft.appconfiguration/configurationstores): create an App Configuration store.
+- [Microsoft.AppConfiguration/configurationStores/keyValues](/azure/templates/microsoft.appconfiguration/configurationstores/keyvalues): create a key-value inside the App Configuration store.
 
 > [!TIP]
 > The `keyValues` resource's name is a combination of key and label. The key and label are joined by the `$` delimiter. The label is optional. In the above example, the `keyValues` resource with name `myKey` creates a key-value without a label.

articles/azure-monitor/containers/prometheus-remote-write-active-directory.md

Lines changed: 1 addition & 1 deletion
@@ -156,7 +156,7 @@ This step is required only if you didn't turn on Azure Key Vault Provider for Se
 | Value | Description |
 |:---|:---|
 | `<CLUSTER-NAME>` | The name of your AKS cluster. |
-| `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20230906.1`<br>The remote write container image version. |
+| `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/containerinsights/ciprod/prometheus-remote-write/images:prom-remotewrite-20240507.1`<br>The remote write container image version. |
 | `<INGESTION-URL>` | The value for **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace. |
 | `<APP-REGISTRATION -CLIENT-ID>` | The client ID of your application. |
 | `<TENANT-ID>` | The tenant ID of the Microsoft Entra application. |

articles/azure-monitor/containers/prometheus-remote-write-azure-ad-pod-identity.md

Lines changed: 15 additions & 0 deletions
@@ -98,6 +98,21 @@ The `aadpodidbinding` label must be added to the Prometheus pod for the pod-mana
 
 [!INCLUDE[pod-identity-yaml](../includes/prometheus-sidecar-remote-write-pod-identity-yaml.md)]
 
+1. Replace the following values in the YAML:
+
+| Value | Description |
+|:---|:---|
+| `<AKS-CLUSTER-NAME>` | The name of your AKS cluster. |
+| `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/containerinsights/ciprod/prometheus-remote-write/images:prom-remotewrite-20240507.1`<br> The remote write container image version. |
+| `<INGESTION-URL>` | The value for **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace. |
+| `<MANAGED-IDENTITY-CLIENT-ID>` | The value for **Client ID** from the **Overview** page for the managed identity. |
+| `<CLUSTER-NAME>` | Name of the cluster that Prometheus is running on. |
+
+> [!IMPORTANT]
+> For Azure Government cloud, add the following environment variables in the `env` section of the YAML file:
+>
+> `- name: INGESTION_AAD_AUDIENCE value: https://monitor.azure.us/`
+
 1. Use Helm to apply the YAML file and update your Prometheus configuration:
 
 ```azurecli
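
To make the Azure Government note above concrete, here is a hypothetical excerpt showing where that audience override would sit in the sidecar container values. Only the image tag and the `INGESTION_AAD_AUDIENCE` name and value come from this commit; the container name and surrounding structure are assumptions, since the real spec comes from the referenced include file.

```yaml
containers:
  - name: prom-remotewrite   # hypothetical container name
    image: mcr.microsoft.com/azuremonitor/containerinsights/ciprod/prometheus-remote-write/images:prom-remotewrite-20240507.1
    env:
      - name: INGESTION_AAD_AUDIENCE   # add for Azure Government cloud
        value: "https://monitor.azure.us/"
```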
