
Commit 9ea4a1c

Merge pull request #275109 from ChenJieting/jieting/llm-tool-doc-revert
return to previous version due to the postponing of llm tool feature ...
2 parents 063004f + 34e0fb2 commit 9ea4a1c

File tree

4 files changed (+24, -329 lines)


articles/ai-studio/how-to/prompt-flow-tools/llm-tool.md

Lines changed: 8 additions & 176 deletions
@@ -17,51 +17,14 @@ author: lgayhardt

[!INCLUDE [Azure AI Studio preview](../../includes/preview-ai-studio.md)]

-The large language model (LLM) tool in prompt flow enables you to take advantage of widely used large language models like [OpenAI](https://platform.openai.com/), [Azure OpenAI Service](../../../ai-services/openai/overview.md), and models in the [Azure AI Studio model catalog](../model-catalog.md) for natural language processing.
-> [!NOTE]
-> The previous version of the LLM tool is being deprecated. Upgrade to the latest [promptflow-tools](https://pypi.org/project/promptflow-tools/) package to use the new LLM tools.
-
-Prompt flow provides a few different large language model APIs:
-
-- [Completion](https://platform.openai.com/docs/api-reference/completions): OpenAI's completion models generate text based on provided prompts.
-- [Chat](https://platform.openai.com/docs/api-reference/chat): OpenAI's chat models facilitate interactive conversations with text-based inputs and responses.
+To use large language models (LLMs) for natural language processing, you use the prompt flow LLM tool.

> [!NOTE]
-> Don't use non-ASCII characters in the resource group name of the Azure OpenAI resource; prompt flow doesn't support this case.
+> For embeddings to convert text into dense vector representations for various natural language processing tasks, see [Embedding tool](embedding-tool.md).

## Prerequisites

-Create OpenAI resources, Azure OpenAI resources, or a MaaS deployment with LLM models (for example, Llama 2, Mistral, or Cohere) in the Azure AI Studio model catalog:
-
-- **OpenAI**:
-
-  - Sign up for an account on the [OpenAI website](https://openai.com/).
-  - Sign in and [find your personal API key](https://platform.openai.com/account/api-keys).
-
-- **Azure OpenAI**:
-
-  - [Create Azure OpenAI resources](../../../ai-services/openai/how-to/create-resource.md).
-
-- **MaaS deployment**:
-
-  [Create a MaaS deployment for models in the Azure AI Studio model catalog](../../concepts/deployments-overview.md#deploy-models-with-model-as-a-service).
-
-  You can create a serverless connection to use this MaaS deployment.
-
-## Connections
-
-Set up connections to provisioned resources in prompt flow.
-
-| Type | Name | API key | API base | API type | API version |
-|-------------|----------|----------|----------|----------|-------------|
-| OpenAI | Required | Required | - | - | - |
-| Azure OpenAI| Required | Required | Required | Required | Required |
-| Serverless | Required | Required | Required | - | - |
-
-> [!TIP]
-> - To use the Microsoft Entra ID auth type for an Azure OpenAI connection, assign either the `Cognitive Services OpenAI User` or `Cognitive Services OpenAI Contributor` role to the user or user-assigned managed identity.
-> - Learn more about [how to use user identity to submit a flow run](../create-manage-runtime.md#create-an-automatic-runtime-on-a-flow-page).
-> - Learn more about [how to configure Azure OpenAI Service with managed identities](../../../ai-services/openai/how-to/managed-identity.md).
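
As a rough sketch of what the connection fields in the table above (API key, API base, API version) and the Microsoft Entra ID option correspond to at call time, the following example uses the `openai` and `azure-identity` Python packages. The endpoint, deployment name, and API version are placeholder values, and this is not how prompt flow stores or resolves connections internally.

```python
# Minimal sketch: what the connection fields map to at call time. The endpoint,
# deployment name, and API version below are placeholders, and this is not how
# prompt flow manages connections internally.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Key-based auth: API key + API base (endpoint) + API version, as in the table above.
client = AzureOpenAI(
    api_key="<your-api-key>",
    azure_endpoint="https://<your-resource>.openai.azure.com",  # API base
    api_version="2024-02-01",                                   # API version (example value)
)

# Microsoft Entra ID auth instead of a key: the identity needs the
# Cognitive Services OpenAI User or Cognitive Services OpenAI Contributor role.
entra_client = AzureOpenAI(
    azure_ad_token_provider=get_bearer_token_provider(
        DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
    ),
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```
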
+Prepare a prompt as described in the [Prompt tool](prompt-tool.md#prerequisites) documentation. The LLM tool and Prompt tool both support [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) templates. For more information and best practices, see [Prompt engineering techniques](../../../ai-services/openai/concepts/advanced-prompt-engineering.md).

## Build with the LLM tool

@@ -72,7 +35,7 @@ Set up connections to provisioned resources in prompt flow.

1. Select the connection to one of your provisioned resources. For example, select **Default_AzureOpenAI**.
1. From the **Api** dropdown list, select **chat** or **completion**.
-1. Enter values for the LLM tool input parameters described in the [Text completion inputs table](#inputs). If you selected the **chat** API, see the [Chat inputs table](#chat-inputs). If you selected the **completion** API, see the [Text completion inputs table](#text-completion-inputs). For information about how to prepare the prompt input, see [How to write a prompt](#how-to-write-a-prompt).
+1. Enter values for the LLM tool input parameters described in the [Text completion inputs table](#inputs). If you selected the **chat** API, see the [Chat inputs table](#chat-inputs). If you selected the **completion** API, see the [Text completion inputs table](#text-completion-inputs). For information about how to prepare the prompt input, see [Prerequisites](#prerequisites).
1. Add more tools to your flow, as needed. Or select **Run** to run the flow.
1. The outputs are described in the [Outputs table](#outputs).
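
The LLM tool's input parameters, such as `temperature`, `max_tokens`, `top_p`, `presence_penalty`, `frequency_penalty`, and `logit_bias`, map directly onto parameters of the OpenAI chat completions API. A minimal sketch, with placeholder endpoint and deployment values:

```python
# Minimal sketch: LLM tool inputs such as temperature, max_tokens, top_p,
# presence_penalty, frequency_penalty, and logit_bias correspond to parameters
# of the OpenAI chat completions API. Endpoint and deployment are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="<your-api-key>",
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize prompt flow in one sentence."},
    ],
    temperature=0.7,        # sampling temperature
    max_tokens=256,         # maximum number of tokens to generate
    top_p=1.0,              # nucleus sampling threshold
    presence_penalty=0.0,   # penalizes tokens that have already appeared at all
    frequency_penalty=0.0,  # penalizes tokens in proportion to how often they appeared
    logit_bias={},          # per-token-ID bias; empty by default
)
print(response.choices[0].message.content)
```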

@@ -111,146 +74,15 @@ The following input parameters are available.
| presence\_penalty | float | The value that controls the model's behavior regarding repeating phrases. Default is 0. | No |
| frequency\_penalty | float | The value that controls the model's behavior regarding generating rare phrases. Default is 0. | No |
| logit\_bias | dictionary | The logit bias for the language model. Default is empty dictionary. | No |
-| tool\_choice | object | Value that controls which tool is called by the model. Default is null. | No |
-| tools | list | A list of tools the model may generate JSON inputs for. Default is null. | No |
-| response_format | object | An object specifying the format that the model must output. Default is null. | No |

## Outputs

The output varies depending on the API you selected for inputs.

-| Return type | Description |
-|-------------|------------------------------------------|
-| string | Text of one predicted completion or response of a conversation |
-
-## How to write a prompt?
-
-Prepare a prompt as described in the [Prompt tool](prompt-tool.md#prerequisites) documentation. The LLM tool and Prompt tool both support [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) templates. For more information and best practices, see [Prompt engineering techniques](../../../ai-services/openai/concepts/advanced-prompt-engineering.md).
-
-For example, a chat prompt distinguishes between the roles "system", "user", "assistant", and "tool". The "system", "user", and "assistant" roles can have "name" and "content" properties. The "tool" role, however, should have "tool_call_id" and "content" properties. For an example of a tool chat prompt, see [Sample 3](#sample-3).
-
-### Sample 1
-```jinja
-# system:
-You are a helpful assistant.
-
-{% for item in chat_history %}
-# user:
-{{item.inputs.question}}
-# assistant:
-{{item.outputs.answer}}
-{% endfor %}
-
-# user:
-{{question}}
-```
-
-In the LLM tool, the prompt is transformed to match the [OpenAI messages](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) structure before it's sent to the OpenAI chat API.
-
-```
-[
-  {
-    "role": "system",
-    "content": "You are a helpful assistant."
-  },
-  {
-    "role": "user",
-    "content": "<question-of-chat-history-round-1>"
-  },
-  {
-    "role": "assistant",
-    "content": "<answer-of-chat-history-round-1>"
-  },
-  ...
-  {
-    "role": "user",
-    "content": "<question>"
-  }
-]
-```
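
The transformation described above can be illustrated with a simplified sketch: render the Jinja template, then split the result on the `# role:` header lines. This is only an approximation for illustration; `parse_chat_prompt` is a hypothetical helper, not promptflow's actual parser.

```python
# Simplified illustration of the transformation above: render the Jinja chat prompt,
# then split it into OpenAI-style messages. This is not promptflow's actual parser;
# parse_chat_prompt is a hypothetical helper written only for this sketch.
import re
from jinja2 import Template

CHAT_TEMPLATE = """\
# system:
You are a helpful assistant.

{% for item in chat_history %}
# user:
{{item.inputs.question}}
# assistant:
{{item.outputs.answer}}
{% endfor %}

# user:
{{question}}
"""

def parse_chat_prompt(rendered: str) -> list[dict]:
    """Split rendered text on '# role:' header lines into chat messages."""
    parts = re.split(r"^#\s*(system|user|assistant)\s*:\s*$", rendered, flags=re.MULTILINE)[1:]
    # re.split with a capturing group yields alternating role / content chunks.
    return [
        {"role": role, "content": content.strip()}
        for role, content in zip(parts[::2], parts[1::2])
    ]

chat_history = [{"inputs": {"question": "Hi!"}, "outputs": {"answer": "Hello! How can I help?"}}]
rendered = Template(CHAT_TEMPLATE).render(chat_history=chat_history, question="What is prompt flow?")
print(parse_chat_prompt(rendered))
```
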
-
-### Sample 2
-```jinja
-# system:
-{# For role naming customization, the following syntax is used #}
-## name:
-Alice
-## content:
-You are a bot that can tell good jokes.
-```
-
-In the LLM tool, the prompt is transformed to match the [OpenAI messages](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) structure before it's sent to the OpenAI chat API.
-
-```
-[
-  {
-    "role": "system",
-    "name": "Alice",
-    "content": "You are a bot that can tell good jokes."
-  }
-]
-```
-
-### Sample 3
-This sample illustrates how to write a tool chat prompt.
-```jinja
-# system:
-You are a helpful assistant.
-# user:
-What is the current weather like in Boston?
-# assistant:
-{# The assistant message with 'tool_calls' must be followed by messages with role 'tool'. #}
-## tool_calls:
-{{llm_output.tool_calls}}
-# tool:
-{#
-Messages with role 'tool' must be a response to a preceding message with 'tool_calls'.
-Additionally, 'tool_call_id's should match the ids of the assistant message's 'tool_calls'.
-#}
-## tool_call_id:
-{{llm_output.tool_calls[0].id}}
-## content:
-{{tool-answer-of-last-question}}
-# user:
-{{question}}
-```
-
-In the LLM tool, the prompt is transformed to match the [OpenAI messages](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) structure before it's sent to the OpenAI chat API.
-
-```
-[
-  {
-    "role": "system",
-    "content": "You are a helpful assistant."
-  },
-  {
-    "role": "user",
-    "content": "What is the current weather like in Boston?"
-  },
-  {
-    "role": "assistant",
-    "content": null,
-    "function_call": null,
-    "tool_calls": [
-      {
-        "id": "<tool-call-id-of-last-question>",
-        "type": "function",
-        "function": "<function-to-call-of-last-question>"
-      }
-    ]
-  },
-  {
-    "role": "tool",
-    "tool_call_id": "<tool-call-id-of-last-question>",
-    "content": "<tool-answer-of-last-question>"
-  }
-  ...
-  {
-    "role": "user",
-    "content": "<question>"
-  }
-]
-```
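
For context on the `tool_calls`/`tool_call_id` pairing that Sample 3 relies on, here's a minimal sketch of a tool-calling round trip against the OpenAI chat completions API; the `get_current_weather` tool definition, endpoint, and deployment name are placeholders.

```python
# Minimal sketch of the tool-calling round trip behind Sample 3. The
# get_current_weather tool, endpoint, and deployment name are placeholders.
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="<your-api-key>",
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_version="2024-02-01",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the current weather like in Boston?"},
]

# First call: the model responds with tool_calls instead of content.
first = client.chat.completions.create(model="<your-deployment-name>", messages=messages, tools=tools)
tool_call = first.choices[0].message.tool_calls[0]

# The assistant message carrying tool_calls must be followed by a "tool" message
# whose tool_call_id matches the id of the corresponding tool call.
messages.append({
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": tool_call.id,
        "type": "function",
        "function": {"name": tool_call.function.name, "arguments": tool_call.function.arguments},
    }],
})
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": json.dumps({"city": "Boston", "forecast": "72 F and sunny"}),
})

# Second call: the model uses the tool result to answer the original question.
second = client.chat.completions.create(model="<your-deployment-name>", messages=messages, tools=tools)
print(second.choices[0].message.content)
```
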
+| API | Return type | Description |
+|------------|-------------|------------------------------------------|
+| Completion | string | The text of one predicted completion. |
+| Chat | string | The text of one response in a conversation. |

## Next steps

articles/ai-studio/how-to/prompt-flow-tools/prompt-flow-tools-overview.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ The following table provides an index of tools in prompt flow.

| Tool name | Description | Package name |
|------|-------------|--------------|
-| [LLM](./llm-tool.md) | Use large language models (LLM) for tasks such as text completion or chat. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [LLM](./llm-tool.md) | Use large language models (LLM) with the Azure OpenAI Service for tasks such as text completion or chat. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
| [Prompt](./prompt-tool.md) | Craft a prompt by using Jinja as the templating language. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
| [Python](./python-tool.md) | Run Python code. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
| [Azure OpenAI GPT-4 Turbo with Vision](./azure-open-ai-gpt-4v-tool.md) | Use an Azure OpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
