Commit 1e4403a
Merge pull request #274975 from MicrosoftDocs/main
Publish to live, Friday 4 AM PST, 5/10
2 parents e391427 + c847215

File tree

21 files changed: +452 −42 lines changed

articles/ai-studio/how-to/prompt-flow-tools/llm-tool.md

Lines changed: 176 additions & 8 deletions
@@ -17,14 +17,51 @@ author: lgayhardt
 
 [!INCLUDE [Azure AI Studio preview](../../includes/preview-ai-studio.md)]
 
-To use large language models (LLMs) for natural language processing, you use the prompt flow LLM tool.
+The large language model (LLM) tool in prompt flow lets you use widely adopted large language models, such as [OpenAI](https://platform.openai.com/), [Azure OpenAI Service](../../../ai-services/openai/overview.md), and models in the [Azure AI Studio model catalog](../model-catalog.md), for natural language processing.
+
+> [!NOTE]
+> The previous version of the LLM tool is deprecated. Upgrade to the latest [promptflow-tools](https://pypi.org/project/promptflow-tools/) package to use the new LLM tools.
+
+Prompt flow provides a few different large language model APIs:
+
+- [Completion](https://platform.openai.com/docs/api-reference/completions): OpenAI's completion models generate text based on provided prompts.
+- [Chat](https://platform.openai.com/docs/api-reference/chat): OpenAI's chat models facilitate interactive conversations with text-based inputs and responses.
 
 > [!NOTE]
-> For embeddings to convert text into dense vector representations for various natural language processing tasks, see [Embedding tool](embedding-tool.md).
+> Don't use non-ASCII characters in the resource group name of your Azure OpenAI resource; prompt flow doesn't support them.
 
 ## Prerequisites
 
-Prepare a prompt as described in the [Prompt tool](prompt-tool.md#prerequisites) documentation. The LLM tool and Prompt tool both support [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) templates. For more information and best practices, see [Prompt engineering techniques](../../../ai-services/openai/concepts/advanced-prompt-engineering.md).
+Create OpenAI resources, Azure OpenAI resources, or a MaaS deployment of an LLM model (for example, Llama 2, Mistral, or Cohere) from the Azure AI Studio model catalog:
+
+- **OpenAI**:
+
+  - Sign up for an account on the [OpenAI website](https://openai.com/).
+  - Sign in and [find your personal API key](https://platform.openai.com/account/api-keys).
+
+- **Azure OpenAI**:
+
+  - [Create Azure OpenAI resources](../../../ai-services/openai/how-to/create-resource.md).
+
+- **MaaS deployment**:
+
+  [Create a MaaS deployment for models in the Azure AI Studio model catalog](../../concepts/deployments-overview.md#deploy-models-with-model-as-a-service).
+
+  You can create a serverless connection to use this MaaS deployment.
+
+## Connections
+
+Set up connections to provisioned resources in prompt flow.
+
+| Type | Name | API key | API base | API type | API version |
+|-------------|----------|----------|----------|----------|-------------|
+| OpenAI | Required | Required | - | - | - |
+| Azure OpenAI| Required | Required | Required | Required | Required |
+| Serverless | Required | Required | Required | - | - |
+
+> [!TIP]
+> - To use the Microsoft Entra ID auth type for an Azure OpenAI connection, assign either the `Cognitive Services OpenAI User` or `Cognitive Services OpenAI Contributor` role to the user or user-assigned managed identity.
+> - Learn more about [how to use a user identity to submit a flow run](../create-manage-runtime.md#create-an-automatic-runtime-on-a-flow-page).
+> - Learn more about [how to configure Azure OpenAI Service with managed identities](../../../ai-services/openai/how-to/managed-identity.md).
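The required fields in the connections table above can be sketched as a small validation helper. This is an illustrative sketch only; the field names mirror the table's columns and are not the exact promptflow connection schema.

```python
# Required connection fields per connection type, mirroring the table above.
# These field names are illustrative, not the exact promptflow schema.
REQUIRED_FIELDS = {
    "OpenAI": {"name", "api_key"},
    "AzureOpenAI": {"name", "api_key", "api_base", "api_type", "api_version"},
    "Serverless": {"name", "api_key", "api_base"},
}


def missing_fields(connection_type: str, config: dict) -> list:
    """Return the required fields that are absent from a connection config."""
    return sorted(REQUIRED_FIELDS[connection_type] - config.keys())
```

For example, an Azure OpenAI config that supplies only `name` and `api_key` would be reported as missing `api_base`, `api_type`, and `api_version`.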
 
 ## Build with the LLM tool

@@ -35,7 +72,7 @@ Prepare a prompt as described in the [Prompt tool](prompt-tool.md#prerequisites)
 
 1. Select the connection to one of your provisioned resources. For example, select **Default_AzureOpenAI**.
 1. From the **Api** dropdown list, select **chat** or **completion**.
-1. Enter values for the LLM tool input parameters described in the [Text completion inputs table](#inputs). If you selected the **chat** API, see the [Chat inputs table](#chat-inputs). If you selected the **completion** API, see the [Text completion inputs table](#text-completion-inputs). For information about how to prepare the prompt input, see [Prerequisites](#prerequisites).
+1. Enter values for the LLM tool input parameters. If you selected the **chat** API, see the [Chat inputs table](#chat-inputs). If you selected the **completion** API, see the [Text completion inputs table](#text-completion-inputs). For information about how to prepare the prompt input, see [How to write a prompt](#how-to-write-a-prompt).
 1. Add more tools to your flow, as needed. Or select **Run** to run the flow.
 1. The outputs are described in the [Outputs table](#outputs).

@@ -74,15 +111,146 @@ The following input parameters are available.
 | presence\_penalty | float | The value that controls the model's behavior regarding repeating phrases. Default is 0. | No |
 | frequency\_penalty | float | The value that controls the model's behavior regarding generating rare phrases. Default is 0. | No |
 | logit\_bias | dictionary | The logit bias for the language model. Default is an empty dictionary. | No |
+| tool\_choice | object | The value that controls which tool the model calls. Default is null. | No |
+| tools | list | A list of tools for which the model can generate JSON inputs. Default is null. | No |
+| response\_format | object | An object that specifies the format the model must output. Default is null. | No |
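The `tools`, `tool_choice`, and `response_format` inputs follow OpenAI's function-calling conventions. Below is a hedged sketch of example values; the `get_weather` function is hypothetical and used only for illustration.

```python
# Example values for the tool-related inputs, following the OpenAI
# function-calling schema. The get_weather function is hypothetical.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# Force the model to call get_weather instead of answering directly.
tool_choice = {"type": "function", "function": {"name": "get_weather"}}

# Ask the model to return a JSON object.
response_format = {"type": "json_object"}
```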

 
 ## Outputs
 
 The output varies depending on the API you selected for inputs.
 
-| API | Return type | Description |
-|------------|-------------|------------------------------------------|
-| Completion | string | The text of one predicted completion. |
-| Chat | string | The text of one response of conversation. |
+| Return type | Description |
+|-------------|------------------------------------------|
+| string | The text of one predicted completion or one conversation response. |
+## How to write a prompt
+
+Prepare a prompt as described in the [Prompt tool](prompt-tool.md#prerequisites) documentation. The LLM tool and the Prompt tool both support [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) templates. For more information and best practices, see [Prompt engineering techniques](../../../ai-services/openai/concepts/advanced-prompt-engineering.md).
+
+For example, a chat prompt can distinguish among roles such as "system", "user", "assistant", and "tool". The "system", "user", and "assistant" roles can have "name" and "content" properties. The "tool" role, however, should have "tool_call_id" and "content" properties. For an example of a tool chat prompt, see [Sample 3](#sample-3).
+### Sample 1
+
+```jinja
+# system:
+You are a helpful assistant.
+
+{% for item in chat_history %}
+# user:
+{{item.inputs.question}}
+# assistant:
+{{item.outputs.answer}}
+{% endfor %}
+
+# user:
+{{question}}
+```
+
+In the LLM tool, the prompt is transformed to match the [OpenAI messages](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) structure before it's sent to the OpenAI chat API:
+
+```
+[
+  {
+    "role": "system",
+    "content": "You are a helpful assistant."
+  },
+  {
+    "role": "user",
+    "content": "<question-of-chat-history-round-1>"
+  },
+  {
+    "role": "assistant",
+    "content": "<answer-of-chat-history-round-1>"
+  },
+  ...
+  {
+    "role": "user",
+    "content": "<question>"
+  }
+]
+```
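The role-marker transformation described above can be sketched in Python. This is an illustrative parser, not the actual promptflow implementation; it handles only simple `# role:` markers, not `## name:` sub-keys or tool calls.

```python
import re

def parse_chat_prompt(prompt: str) -> list:
    """Convert a role-marked prompt (lines like '# system:') into an
    OpenAI-style messages list. Simplified sketch: top-level role
    markers only, no '## name:' sub-keys and no tool-call handling."""
    messages = []
    for line in prompt.splitlines():
        m = re.match(r"^#\s*(system|user|assistant|tool)\s*:\s*$", line)
        if m:
            # Start a new message for this role.
            messages.append({"role": m.group(1), "content": ""})
        elif messages:
            # Accumulate body lines into the current message.
            messages[-1]["content"] += line + "\n"
    for msg in messages:
        msg["content"] = msg["content"].strip()
    return messages
```

For the Sample 1 prompt (after Jinja rendering), this would yield the alternating system/user/assistant message list shown above.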
+### Sample 2
+
+```jinja
+# system:
+{# For role name customization, the following syntax is used #}
+## name:
+Alice
+## content:
+You are a bot that can tell good jokes.
+```
+
+In the LLM tool, the prompt is transformed to match the [OpenAI messages](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) structure before it's sent to the OpenAI chat API:
+
+```
+[
+  {
+    "role": "system",
+    "name": "Alice",
+    "content": "You are a bot that can tell good jokes."
+  }
+]
+```
+### Sample 3
+
+This sample illustrates how to write a tool chat prompt.
+
+```jinja
+# system:
+You are a helpful assistant.
+# user:
+What is the current weather like in Boston?
+# assistant:
+{# An assistant message with 'tool_calls' must be followed by messages with role 'tool'. #}
+## tool_calls:
+{{llm_output.tool_calls}}
+# tool:
+{#
+Messages with role 'tool' must be responses to a preceding message with 'tool_calls'.
+Additionally, each 'tool_call_id' should match an id in the assistant message's 'tool_calls'.
+#}
+## tool_call_id:
+{{llm_output.tool_calls[0].id}}
+## content:
+{{tool-answer-of-last-question}}
+# user:
+{{question}}
+```
+
+In the LLM tool, the prompt is transformed to match the [OpenAI messages](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) structure before it's sent to the OpenAI chat API:
+
+```
+[
+  {
+    "role": "system",
+    "content": "You are a helpful assistant."
+  },
+  {
+    "role": "user",
+    "content": "What is the current weather like in Boston?"
+  },
+  {
+    "role": "assistant",
+    "content": null,
+    "function_call": null,
+    "tool_calls": [
+      {
+        "id": "<tool-call-id-of-last-question>",
+        "type": "function",
+        "function": "<function-to-call-of-last-question>"
+      }
+    ]
+  },
+  {
+    "role": "tool",
+    "tool_call_id": "<tool-call-id-of-last-question>",
+    "content": "<tool-answer-of-last-question>"
+  },
+  ...
+  {
+    "role": "user",
+    "content": "<question>"
+  }
+]
+```
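The tool_call_id matching rule stated in Sample 3 can be checked with a small helper. An illustrative sketch, not part of promptflow.

```python
def check_tool_messages(messages: list) -> bool:
    """Check that every 'tool' message answers a tool_call id from the
    closest preceding assistant message, as Sample 3's comments require."""
    pending_ids = set()
    for msg in messages:
        if msg["role"] == "assistant" and msg.get("tool_calls"):
            # Remember the ids the tool responses must reference.
            pending_ids = {call["id"] for call in msg["tool_calls"]}
        elif msg["role"] == "tool":
            if msg.get("tool_call_id") not in pending_ids:
                return False
    return True
```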
 
 ## Next steps

articles/ai-studio/how-to/prompt-flow-tools/prompt-flow-tools-overview.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ The following table provides an index of tools in prompt flow.
 
 | Tool name | Description | Package name |
 |------|-----------|--------------|
-| [LLM](./llm-tool.md) | Use large language models (LLM) with the Azure OpenAI Service for tasks such as text completion or chat. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [LLM](./llm-tool.md) | Use large language models (LLM) for tasks such as text completion or chat. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
 | [Prompt](./prompt-tool.md) | Craft a prompt by using Jinja as the templating language. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
 | [Python](./python-tool.md) | Run Python code. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
 | [Azure OpenAI GPT-4 Turbo with Vision](./azure-open-ai-gpt-4v-tool.md) | Use an Azure OpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
(Binary image file changed: −46.7 KB)

articles/azure-signalr/signalr-howto-work-with-app-gateway.md

Lines changed: 8 additions & 6 deletions
@@ -3,7 +3,7 @@ title: How to use SignalR Service with Azure Application Gateway
 description: This article provides information about using Azure SignalR Service with Azure Application Gateway.
 author: vicancy
 ms.author: lianwei
-ms.date: 08/16/2022
+ms.date: 05/10/2024
 ms.service: signalr
 ms.topic: how-to
 ---
@@ -241,24 +241,26 @@ Let's deploy the Chat application into the same VNet with **_ASRS1_** so that th
 
 ### Deploy the chat application to Azure
 
-- On the [Azure portal](https://portal.azure.com/), search for **App services** and **Create**.
+- On the [Azure portal](https://portal.azure.com/), search for **App services** and select **Create** > **Web App**.
 
-- On the **Basics** tab, use these values for the following application gateway settings:
+- On the **Basics** tab, use these values for the following web app settings:
   - **Subscription**, **Resource group**, and **Region**: the same as what you chose for SignalR Service
   - **Name**: **_WA1_**
   - **Publish**: **Code**
   - **Runtime stack**: **.NET 6 (LTS)**
   - **Operating System**: **Linux**
   - **Region**: Make sure it's the same as what you chose for SignalR Service
-  - Select **Next: Docker**
+  - Select **Next: Deployment**, keep all settings as default, and select **Next: Networking**
 - On the **Networking** tab:
   - **Enable network injection**: select **On**
   - **Virtual Network**: select **_VN1_**, which we previously created
   - **Enable VNet integration**: **On**
   - **Outbound subnet**: create a new subnet
 - Select **Review + create**

-Now let's deploy our chat application to Azure. Below we use Azure CLI to deploy the web app, you can also choose other deployment environments following [publish your web app section](/azure/app-service/quickstart-dotnetcore#publish-your-web-app).
+Now let's deploy our chat application to Azure. We use the Azure CLI to deploy the web app. See [Quickstart: Deploy an ASP.NET web app](/azure/app-service/quickstart-dotnetcore) for other ways to deploy to Azure.

 Under the folder samples/Chatroom, run the following commands:

@@ -271,7 +273,7 @@ zip -r app.zip .
 # use az CLI to deploy app.zip to our webapp
 az login
 az account set -s <your-subscription-name-used-to-create-WA1>
-az webapp deployment source config-zip -n WA1 -g <resource-group-of-WA1> --src app.zip
+az webapp deploy -g <resource-group-of-WA1> -n WA1 --src-path app.zip
 ```
 
 Now that the web app is deployed, let's go to the portal for **_WA1_** and make the following updates:

articles/hdinsight/TOC.yml

Lines changed: 2 additions & 0 deletions
@@ -651,6 +651,8 @@ items:
       href: ./hadoop/troubleshoot-wasbs-storage-exception.md
     - name: Manage disk space
       href: ./hadoop/troubleshoot-disk-space.md
+    - name: Ambari user configs migration
+      href: ./migrate-ambari-recent-version-hdinsight.md
   - name: Apache Kafka
     items:
     - name: Overview
(Binary image files changed: 85.2 KB, 98.1 KB, 178 KB, 10.2 KB, 63.8 KB)
