Commit e7b9f23

Merge pull request #273822 from ChenJieting/jieting/llm-tool-update
update the llm tool doc for build
2 parents 6ced711 + ad76454 commit e7b9f23

5 files changed: +336 additions, -31 deletions

articles/ai-studio/how-to/prompt-flow-tools/llm-tool.md

Lines changed: 176 additions & 8 deletions
@@ -17,14 +17,51 @@ author: lgayhardt
 
 [!INCLUDE [Azure AI Studio preview](../../includes/preview-ai-studio.md)]
 
-To use large language models (LLMs) for natural language processing, you use the prompt flow LLM tool.
+The large language model (LLM) tool in prompt flow enables you to use widely adopted large language models like [OpenAI](https://platform.openai.com/), [Azure OpenAI Service](../../../ai-services/openai/overview.md), and models in the [Azure AI Studio model catalog](../model-catalog.md) for natural language processing.
+
+> [!NOTE]
+> The previous version of the LLM tool is being deprecated. Upgrade to the latest [promptflow-tools](https://pypi.org/project/promptflow-tools/) package to use the new LLM tools.
+
+Prompt flow provides a few different large language model APIs:
+
+- [Completion](https://platform.openai.com/docs/api-reference/completions): OpenAI's completion models generate text based on provided prompts.
+- [Chat](https://platform.openai.com/docs/api-reference/chat): OpenAI's chat models facilitate interactive conversations with text-based inputs and responses.
 
 > [!NOTE]
-> For embeddings to convert text into dense vector representations for various natural language processing tasks, see [Embedding tool](embedding-tool.md).
+> Don't use non-ASCII characters in the resource group name of an Azure OpenAI resource; prompt flow doesn't support that case.
 
 ## Prerequisites
 
-Prepare a prompt as described in the [Prompt tool](prompt-tool.md#prerequisites) documentation. The LLM tool and Prompt tool both support [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) templates. For more information and best practices, see [Prompt engineering techniques](../../../ai-services/openai/concepts/advanced-prompt-engineering.md).
+Create OpenAI resources, Azure OpenAI resources, or a MaaS deployment with an LLM model (for example, Llama 2, Mistral, or Cohere) from the Azure AI Studio model catalog:
+
+- **OpenAI**:
+
+  - Sign up for an account on the [OpenAI website](https://openai.com/).
+  - Sign in and [find your personal API key](https://platform.openai.com/account/api-keys).
+
+- **Azure OpenAI**:
+
+  - [Create Azure OpenAI resources](../../../ai-services/openai/how-to/create-resource.md).
+
+- **MaaS deployment**:
+
+  - [Create a MaaS deployment for models in the Azure AI Studio model catalog](../../concepts/deployments-overview.md#deploy-models-with-model-as-a-service).
+
+    You can create a serverless connection to use this MaaS deployment.
+
+## Connections
+
+Set up connections to the provisioned resources in prompt flow.
+
+| Type | Name | API key | API base | API type | API version |
+|-------------|----------|----------|----------|----------|-------------|
+| OpenAI | Required | Required | - | - | - |
+| Azure OpenAI| Required | Required | Required | Required | Required |
+| Serverless | Required | Required | Required | - | - |
+
+> [!TIP]
+> - To use the Microsoft Entra ID auth type for an Azure OpenAI connection, assign either the `Cognitive Services OpenAI User` or `Cognitive Services OpenAI Contributor` role to the user or user-assigned managed identity.
+> - Learn more about [how to specify a user identity to submit a flow run](../create-manage-runtime.md#create-an-automatic-runtime-on-a-flow-page).
+> - Learn more about [how to configure Azure OpenAI Service with managed identities](../../../ai-services/openai/how-to/managed-identity.md).
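As a hedged illustration of the connection table above, the required fields can be sketched as plain key/value pairs. The names, endpoints, and versions below are placeholders, not real resources:

```python
# Illustrative sketch only: placeholder values showing which fields each
# connection type requires, per the table above.
azure_openai_connection = {
    "name": "my_azure_openai_connection",                     # Name: required
    "api_key": "<your-api-key>",                              # API key: required
    "api_base": "https://<resource-name>.openai.azure.com/",  # API base: required
    "api_type": "azure",                                      # API type: required
    "api_version": "<api-version>",                           # API version: required
}

serverless_connection = {
    "name": "my_serverless_connection",    # Name: required
    "api_key": "<your-api-key>",           # API key: required
    "api_base": "https://<endpoint-url>",  # API base: required
    # No API type or API version fields for a serverless connection.
}
```

Note that per the table, only the Azure OpenAI connection carries API type and API version; OpenAI and serverless connections omit them.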
 
 ## Build with the LLM tool

@@ -35,7 +72,7 @@ Prepare a prompt as described in the [Prompt tool](prompt-tool.md#prerequisites)
 
 1. Select the connection to one of your provisioned resources. For example, select **Default_AzureOpenAI**.
 1. From the **Api** dropdown list, select **chat** or **completion**.
-1. Enter values for the LLM tool input parameters described in the [Text completion inputs table](#inputs). If you selected the **chat** API, see the [Chat inputs table](#chat-inputs). If you selected the **completion** API, see the [Text completion inputs table](#text-completion-inputs). For information about how to prepare the prompt input, see [Prerequisites](#prerequisites).
+1. Enter values for the LLM tool input parameters. If you selected the **chat** API, see the [Chat inputs table](#chat-inputs). If you selected the **completion** API, see the [Text completion inputs table](#text-completion-inputs). For information about how to prepare the prompt input, see [How to write a prompt](#how-to-write-a-prompt).
 1. Add more tools to your flow, as needed. Or select **Run** to run the flow.
 1. The outputs are described in the [Outputs table](#outputs).

@@ -74,15 +111,146 @@ The following input parameters are available.
 | presence\_penalty | float | The value that controls the model's behavior regarding repeating phrases. Default is 0. | No |
 | frequency\_penalty | float | The value that controls the model's behavior regarding generating rare phrases. Default is 0. | No |
 | logit\_bias | dictionary | The logit bias for the language model. Default is empty dictionary. | No |
+| tool\_choice | object | The value that controls which tool the model calls. Default is null. | No |
+| tools | list | A list of tools that the model can generate JSON inputs for. Default is null. | No |
+| response_format | object | An object that specifies the format the model must output. Default is null. | No |
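The `tools`, `tool_choice`, and `response_format` parameters follow the OpenAI chat API's function-calling format. A hedged sketch of the shapes these values can take; the `get_current_weather` function is a made-up example, not part of the tool:

```python
# Hypothetical function definition for illustration; the structure follows
# the OpenAI chat API's function-calling format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# Direct the model to call the function above rather than answer directly.
tool_choice = {"type": "function", "function": {"name": "get_current_weather"}}

# Ask the model to emit a JSON object instead of free text.
response_format = {"type": "json_object"}
```

Leaving `tool_choice` as its default (null) lets the model decide whether to call a tool at all.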

 ## Outputs
 
 The output varies depending on the API you selected for inputs.
 
-| API | Return type | Description |
-|------------|-------------|------------------------------------------|
-| Completion | string | The text of one predicted completion. |
-| Chat | string | The text of one response of conversation. |
+| Return type | Description |
+|-------------|------------------------------------------|
+| string | The text of one predicted completion or one conversation response |
+
+## How to write a prompt
+
+Prepare a prompt as described in the [Prompt tool](prompt-tool.md#prerequisites) documentation. The LLM tool and Prompt tool both support [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) templates. For more information and best practices, see [Prompt engineering techniques](../../../ai-services/openai/concepts/advanced-prompt-engineering.md).
+
+For example, in a chat prompt you can distinguish between different roles, such as "system", "user", "assistant", and "tool". The "system", "user", and "assistant" roles can have "name" and "content" properties. The "tool" role, however, should have "tool_call_id" and "content" properties. For an example of a tool chat prompt, see [Sample 3](#sample-3).
+
+### Sample 1
+
+```jinja
+# system:
+You are a helpful assistant.
+
+{% for item in chat_history %}
+# user:
+{{item.inputs.question}}
+# assistant:
+{{item.outputs.answer}}
+{% endfor %}
+
+# user:
+{{question}}
+```
+
+In the LLM tool, the prompt is transformed to match the [OpenAI messages](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) structure before it's sent to the OpenAI chat API.
+
+```
+[
+  {
+    "role": "system",
+    "content": "You are a helpful assistant."
+  },
+  {
+    "role": "user",
+    "content": "<question-of-chat-history-round-1>"
+  },
+  {
+    "role": "assistant",
+    "content": "<answer-of-chat-history-round-1>"
+  },
+  ...
+  {
+    "role": "user",
+    "content": "<question>"
+  }
+]
+```
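The transformation above can be approximated with a short parser. This is a simplified sketch rather than the LLM tool's actual implementation, and `parse_chat_prompt` is a hypothetical helper; it also ignores the `## name:` and `## tool_calls:` sub-properties shown in the later samples:

```python
import re

def parse_chat_prompt(prompt: str) -> list:
    """Split a rendered role-annotated prompt into OpenAI-style messages.

    A line like '# system:' or '# user:' starts a new message; the lines
    that follow it become that message's content.
    """
    messages, role, buffer = [], None, []
    for line in prompt.splitlines():
        match = re.fullmatch(r"#\s*(system|user|assistant|tool)\s*:\s*", line)
        if match:
            if role is not None:
                messages.append({"role": role, "content": "\n".join(buffer).strip()})
            role, buffer = match.group(1), []
        else:
            buffer.append(line)
    if role is not None:
        messages.append({"role": role, "content": "\n".join(buffer).strip()})
    return messages

# A rendered two-role prompt (the Jinja loop has already been expanded).
rendered = """# system:
You are a helpful assistant.

# user:
What is the capital of France?"""

messages = parse_chat_prompt(rendered)
# messages is a system message followed by a user message.
```

Running the sketch yields the same system-then-user structure as the JSON above.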
+
+### Sample 2
+
+```jinja
+# system:
+{# For role name customization, the following syntax is used #}
+## name:
+Alice
+## content:
+You are a bot that can tell good jokes.
+```
+
+In the LLM tool, the prompt is transformed to match the [OpenAI messages](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) structure before it's sent to the OpenAI chat API.
+
+```
+[
+  {
+    "role": "system",
+    "name": "Alice",
+    "content": "You are a bot that can tell good jokes."
+  }
+]
+```
+
+### Sample 3
+
+This sample illustrates how to write a tool chat prompt.
+
+```jinja
+# system:
+You are a helpful assistant.
+# user:
+What is the current weather like in Boston?
+# assistant:
+{# The assistant message with 'tool_calls' must be followed by messages with the role 'tool'. #}
+## tool_calls:
+{{llm_output.tool_calls}}
+# tool:
+{#
+Messages with the role 'tool' must be a response to a preceding message with 'tool_calls'.
+Additionally, each 'tool_call_id' should match an id in the assistant message's 'tool_calls'.
+#}
+## tool_call_id:
+{{llm_output.tool_calls[0].id}}
+## content:
+{{tool-answer-of-last-question}}
+# user:
+{{question}}
+```
+
+In the LLM tool, the prompt is transformed to match the [OpenAI messages](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) structure before it's sent to the OpenAI chat API.
+
+```
+[
+  {
+    "role": "system",
+    "content": "You are a helpful assistant."
+  },
+  {
+    "role": "user",
+    "content": "What is the current weather like in Boston?"
+  },
+  {
+    "role": "assistant",
+    "content": null,
+    "function_call": null,
+    "tool_calls": [
+      {
+        "id": "<tool-call-id-of-last-question>",
+        "type": "function",
+        "function": "<function-to-call-of-last-question>"
+      }
+    ]
+  },
+  {
+    "role": "tool",
+    "tool_call_id": "<tool-call-id-of-last-question>",
+    "content": "<tool-answer-of-last-question>"
+  }
+  ...
+  {
+    "role": "user",
+    "content": "<question>"
+  }
+]
+```
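The pairing rule in Sample 3 can be checked mechanically: each "tool" message must echo an id from the preceding assistant message's `tool_calls`. A hedged sketch with made-up ids, function details, and weather data:

```python
# Placeholder data for illustration; the id, function, and content are made up.
assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_abc123",
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "arguments": '{"location": "Boston"}',
            },
        }
    ],
}

# The 'tool' reply must reference the id of the tool call it answers.
tool_message = {
    "role": "tool",
    "tool_call_id": assistant_message["tool_calls"][0]["id"],
    "content": "72 F and sunny",
}

# Validate the pairing: the tool reply's id must appear among the
# assistant message's tool-call ids.
valid_ids = {call["id"] for call in assistant_message["tool_calls"]}
is_valid_reply = tool_message["tool_call_id"] in valid_ids
```

If the ids don't match, the OpenAI chat API rejects the conversation, which is why the template reads the id directly from `llm_output.tool_calls[0].id`.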
 
 ## Next steps

articles/ai-studio/how-to/prompt-flow-tools/prompt-flow-tools-overview.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ The following table provides an index of tools in prompt flow.
 
 | Tool name | Description | Package name |
 |------|-----------|--------------|
-| [LLM](./llm-tool.md) | Use large language models (LLM) with the Azure OpenAI Service for tasks such as text completion or chat. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [LLM](./llm-tool.md) | Use large language models (LLMs) for tasks such as text completion or chat. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
 | [Prompt](./prompt-tool.md) | Craft a prompt by using Jinja as the templating language. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
 | [Python](./python-tool.md) | Run Python code. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
 | [Azure OpenAI GPT-4 Turbo with Vision](./azure-open-ai-gpt-4v-tool.md) | Use an Azure OpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
(Image file changed: -46.7 KB)
