articles/machine-learning/prompt-flow/how-to-develop-flow.md (33 additions, 33 deletions)
@@ -1,5 +1,5 @@
 ---
-title: Develop a prompt flow
+title: Develop prompt flow
 titleSuffix: Azure Machine Learning
 description: Learn how to develop a prompt flow and a chat flow in Azure Machine Learning studio.
 services: machine-learning
@@ -14,7 +14,7 @@ ms.author: lagayhar
 ms.reviewer: jinzhong
 ms.date: 10/15/2024
 ---
-# Develop a prompt flow
+# Develop prompt flow

 Prompt flow is a development tool that streamlines the development cycle of AI applications that are powered by Large Language Models (LLMs). In this article, you learn how to create and develop a prompt flow and a chat flow in Azure Machine Learning studio.
@@ -24,7 +24,7 @@ As the momentum for LLM-based AI applications grows, prompt flow provides a comp
 - Easily test, debug, and iterate your flows.
 - Create prompt variants and compare their performance.

-## Create and develop a prompt flow
+## Create and develop your prompt flow

 To create a prompt flow, select **Prompt flow** in the Azure Machine Learning studio left navigation, and then select **Create** on the **Prompt flow** page.
@@ -76,13 +76,13 @@ You can set node **Inputs** and **Outputs** in the following ways:

 After you finish composing a prompt or Python script, select **Validate and parse input** for the system to automatically parse the node input based on the prompt template and Python function input.

-You can link nodes by referencing node output. For example, you can reference the LLM node output in the Python node input so the Python node consumes the LLM node output. In the **Graph** view you can see the two nodes linked together.
+You can link nodes by referencing node output. For example, you can reference the LLM node output in the Python node input so the Python node consumes the LLM node output. In the **Graph** view, you can see the two nodes linked together.

 #### LLM nodes

 For an LLM node for Azure OpenAI, you need to select **Connection**, **Api**, and **deployment_name**, and set the **Prompt**. You use the connection to securely store and manage secret keys or other sensitive credentials required for interacting with Azure OpenAI.

-If you don't already have a connection, create it before you add an LLM node, and make sure the Azure OpenAI resource has a **chat** or **completion** deployment. For more information, see [Set up a connection](get-started-with-prompt-flow.md#set-up-a-connection) and [Create a resource and deploy a model using Azure OpenAI](/azure/cognitive-services/openai/how-to/create-resource).
+If you don't already have a connection, create it before you add an LLM node, and make sure the Azure OpenAI resource has a **chat** or **completion** deployment. For more information, see [Set up a connection](get-started-prompt-flow.md#set-up-a-connection) and [Create a resource and deploy a model using Azure OpenAI](/azure/cognitive-services/openai/how-to/create-resource).

 #### Python nodes
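A Python node such as the one described in this hunk is essentially a function whose inputs can be mapped to another node's output. The following is a minimal, hypothetical sketch: the function and input names are illustrative, and in an actual flow the function would additionally carry the `@tool` decorator from the `promptflow` package.

```python
# Hypothetical Python node body. In a real flow, this function would be
# decorated with promptflow's @tool decorator, and the "llm_answer" input
# would be mapped to the LLM node's output (for example ${llm_node.output})
# in the flow graph, so this node consumes the LLM node's output.
def process_llm_output(llm_answer: str) -> str:
    # Trim whitespace and keep only the first line of the LLM answer.
    return llm_answer.strip().splitlines()[0]

print(process_llm_output("  Paris\nis the capital of France.  "))
```

After **Validate and parse input**, each function parameter surfaces as a node input that can be bound to another node's output.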
@@ -159,34 +159,34 @@ To help you manage chat history, `chat_history` in the **Inputs** section is reserved

 Chat history is structured as a list of inputs and outputs. All interactions in the chat box, including user chat inputs, generated chat outputs, and other flow inputs and outputs, are automatically stored in chat history.

 ```json
 [
   {
     "inputs": {
       "<flow input 1>": "xxxxxxxxxxxxxxx",
       "<flow input 2>": "xxxxxxxxxxxxxxx",
       "<flow input N>": "xxxxxxxxxxxxxxx"
     },
     "outputs": {
       "<flow output 1>": "xxxxxxxxxxxx",
       "<flow output 2>": "xxxxxxxxxxxxx",
       "<flow output M>": "xxxxxxxxxxxxx"
     }
   },
   {
     "inputs": {
       "<flow input 1>": "xxxxxxxxxxxxxxx",
       "<flow input 2>": "xxxxxxxxxxxxxxx",
       "<flow input N>": "xxxxxxxxxxxxxxx"
     },
     "outputs": {
       "<flow output 1>": "xxxxxxxxxxxx",
       "<flow output 2>": "xxxxxxxxxxxxx",
       "<flow output M>": "xxxxxxxxxxxxx"
     }
   }
 ]
 ```

 > [!NOTE]
 > When you conduct tests in the **Chat** box, you automatically save chat history. For batch runs, you must include chat history within the batch run dataset. If there's no chat history available, set the `chat_history` to an empty list `[]` within the batch run dataset.
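Because chat history is just a list of input/output records, it can be built up and serialized with ordinary JSON tooling. A minimal sketch, with illustrative `question`/`answer` field names in place of the real flow input and output names:

```python
import json

# Chat history accumulates as a list of {"inputs": ..., "outputs": ...}
# records; the "question"/"answer" names here are illustrative.
chat_history = []
chat_history.append({
    "inputs": {"question": "What is prompt flow?"},
    "outputs": {"answer": "A development tool for LLM-based apps."},
})

# A follow-up batch-run dataset row carries the prior turns, while a row
# with no prior conversation carries an empty chat_history list.
batch_row = {"question": "Tell me more.", "chat_history": chat_history}
first_turn_row = {"question": "What is prompt flow?", "chat_history": []}

print(json.dumps(first_turn_row))
```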
@@ -217,6 +217,6 @@ The **Chat** box provides an interactive way to test your chat flow by simulating

 ## Related content

-- [Batch run using more data and evaluate the flow performance](how-to-bulk-test-evaluate-flow.md)
+- [Submit batch run and evaluate a flow](how-to-bulk-test-evaluate-flow.md)
 - [Tune prompts using variants](how-to-tune-prompts-using-variants.md)
-- [Deploy a flow](how-to-deploy-for-real-time-inference.md)
+- [Deploy a flow as a managed online endpoint for real-time inference](how-to-deploy-for-real-time-inference.md)