`articles/ai-studio/how-to/develop/trace-production-sdk.md`

ms.service: azure-ai-foundry
ms.custom:
  - build-2024
ms.topic: how-to
ms.date: 02/14/2025
ms.reviewer: none
ms.author: lagayhar
author: lgayhardt
---

After you test your flow properly, either a flex flow or a DAG flow, you can deploy the flow.

You can also [deploy to other platforms, such as Docker container, Kubernetes cluster, and more](https://microsoft.github.io/promptflow/how-to-guides/deploy-a-flow/index.html).

> [!NOTE]
> You need to use the latest prompt flow base image to deploy the flow, so that it supports the tracing and feedback collection API.

## Enable trace and collect system metrics for your deployment

If you're using the Azure AI Foundry portal to deploy, you can turn on **Application Insights diagnostics** in the **Advanced settings** > **Deployment** step of the deployment wizard. This way, the tracing data and system metrics are collected to the project linked to Application Insights.

If you're using the SDK or CLI, you can add the property `app_insights_enabled: true` in the deployment YAML file, which collects data to the project linked to Application Insights.

```yaml
app_insights_enabled: true
environment_variables:
  APPLICATIONINSIGHTS_CONNECTION_STRING: <connection_string>
```

> [!NOTE]
> If you only set `app_insights_enabled: true` but your project doesn't have a linked Application Insights resource, your deployment won't fail, but no data is collected.
>
> If you specify both `app_insights_enabled: true` and the previous environment variable at the same time, the tracing data and metrics are sent to the project linked to Application Insights. If you want to specify a different Application Insights resource, keep only the environment variable.
>
> If you deploy to other platforms, you can also use the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING: <connection_string>` to collect trace data and metrics to the specified Application Insights resource.

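For instance, following the note above, a deployment YAML fragment that sends data to a different Application Insights resource keeps only the connection string. This is a sketch; the connection string placeholder is yours to fill in:

```yaml
# Sketch: collect traces and metrics to a specific Application Insights
# resource by setting only the connection string (no app_insights_enabled).
environment_variables:
  APPLICATIONINSIGHTS_CONNECTION_STRING: <connection_string>
```
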
## View tracing data in Application Insights

Traces record specific events or the state of an application during execution. They can include data about function calls, variable values, system events, and more. Traces help break down an application's components into discrete inputs and outputs, which is crucial for debugging and understanding an application. You can learn more about traces in the [OpenTelemetry documentation](https://opentelemetry.io/docs/concepts/signals/traces/). The trace data follows the [OpenTelemetry specification](https://opentelemetry.io/docs/specs/otel/).

You can view the detailed trace in the specified Application Insights resource. The following screenshot shows an example of an event of a deployed flow containing multiple nodes. In Application Insights, go to **Investigate** > **Transaction search**, where you can select each node to view its detailed trace.

The **Dependency** type events record calls from your deployments. The name of that event is the name of the flow folder. Learn more about [Transaction search and diagnostics in Application Insights](/azure/azure-monitor/app/transaction-search-and-diagnostics).

## View system metrics in Application Insights
| Metrics Name | Type | Dimensions | Description |
3. You might want to preprocess the image using the [Python tool](./prompt-flow-tools/python-tool.md) before feeding it to the LLM. For example, you can resize or crop the image to a smaller size.

   :::image type="content" source="../media/prompt-flow/how-to-process-image/process-image-using-python.png" alt-text="Screenshot of using Python tool to do image preprocessing." lightbox="../media/prompt-flow/how-to-process-image/process-image-using-python.png":::

   ```python
   from promptflow import tool
   from promptflow.contracts.multimedia import Image as PFImage
   ```

> [!NOTE]
> To process images using a Python function, you need to use the `Image` class that you import from the `promptflow.contracts.multimedia` package. The `Image` class is used to represent an `Image` type within prompt flow. It's designed to work with image data in byte format, which is convenient when you need to handle or manipulate the image data directly.
>
> To return the processed image data, you need to use the `Image` class to wrap the image data. Create an `Image` object by providing the image data in bytes and the [MIME type](https://developer.mozilla.org/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types) `mime_type`. The MIME type lets the system understand the format of the image data, or it can be `*` for unknown type.

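Because the `Image` class wraps raw bytes plus a MIME type, you may need to determine the MIME type of incoming byte data before wrapping it. The helper below is a hypothetical, stdlib-only sketch (not part of prompt flow) that sniffs common image formats from their magic bytes and falls back to `*` for unknown types:

```python
def sniff_mime_type(data: bytes) -> str:
    """Guess an image MIME type from magic bytes.

    Returns "*" for unknown formats, which prompt flow also accepts
    as an "unknown type" marker for image data.
    """
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "image/png"
    if data.startswith(b"\xff\xd8\xff"):
        return "image/jpeg"
    if data[:6] in (b"GIF87a", b"GIF89a"):
        return "image/gif"
    return "*"
```

The result can then be passed as the `mime_type` argument when constructing the `Image` object.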
4. Run the Python node and check the output. In this example, the Python function returns the processed Image object. Select the image output to preview the image.

   If the Image object from the Python node is set as the flow output, you can preview the image in the flow output page as well.

## Use GPT-4V tool

Add the Azure OpenAI GPT-4 Turbo with Vision tool to the flow.

:::image type="content" source="../media/prompt-flow/how-to-process-image/gpt-4v-tool.png" alt-text="Screenshot of GPT-4V tool." lightbox="../media/prompt-flow/how-to-process-image/gpt-4v-tool.png":::

The Jinja template for composing prompts in the GPT-4V tool follows a similar structure to the chat API in the LLM tool. To represent an image input within your prompt, you can use the syntax `![image]({{INPUT NAME}})`. Image input can be passed in the `user`, `system`, and `assistant` messages.

Once you've composed the prompt, select the **Validate and parse input** button to parse the input placeholders. The image input represented by `![image]({{INPUT NAME}})` will be parsed as image type with the input name as INPUT NAME.

You can assign a value to the image input through the following ways:

- Reference from the flow input of Image type.
- Reference from other node's output of Image type.
- Upload, drag, or paste an image, or specify an image URL or the relative image path.

## Build a chatbot to process images
In this section, you learn how to build a chatbot that can process image and text inputs.

Assume you want to build a chatbot that can answer any questions about the image and text together. You can achieve this by following the steps in this section.

1. Create a **chat flow**.
1. In *Inputs*, select the data type as **"list"**. In the chat box, users can input a mixed sequence of texts and images, and the prompt flow service transforms that into a list.

:::image type="content" source="../media/prompt-flow/how-to-process-image/chat-input-definition.png" alt-text="Screenshot of chat input type configuration." lightbox ="../media/prompt-flow/how-to-process-image/chat-input-definition.png":::

1. Add the **GPT-4V** tool to the flow. You can copy the prompt from the default LLM tool chat and paste it into the GPT-4V tool. Then delete the default LLM tool chat from the flow.

   :::image type="content" source="../media/prompt-flow/how-to-process-image/gpt-4v-tool-in-chatflow.png" alt-text="Screenshot of GPT-4V tool in chat flow." lightbox="../media/prompt-flow/how-to-process-image/gpt-4v-tool-in-chatflow.png":::

   In this example, `{{question}}` refers to the chat input, which is a list of texts and images.

1. In *Outputs*, change the value of "answer" to the name of your vision tool's output, for example, `${gpt_vision.output}`.

   :::image type="content" source="../media/prompt-flow/how-to-process-image/chat-output-definition.png" alt-text="Screenshot of chat output type configuration." lightbox="../media/prompt-flow/how-to-process-image/chat-output-definition.png":::

1. (Optional) You can add any custom logic to the flow to process the GPT-4V output. For example, you can add a content safety tool to detect if the answer contains any inappropriate content, and return a final answer to the user.

   :::image type="content" source="../media/prompt-flow/how-to-process-image/chat-flow-postprocess.png" alt-text="Screenshot of processing GPT-4V output with content safety tool." lightbox="../media/prompt-flow/how-to-process-image/chat-flow-postprocess.png":::

1. Now you can **test the chatbot**. Open the chat window, and input any questions with images. The chatbot answers the questions based on the image and text inputs. The chat input value is automatically backfilled from the input in the chat window. You can find the texts with images in the chat box, which is translated into a list of texts and images.

:::image type="content" source="../media/prompt-flow/how-to-process-image/chatbot-test.png" alt-text="Screenshot of chatbot interaction with images." lightbox ="../media/prompt-flow/how-to-process-image/chatbot-test.png":::

A batch run allows you to test the flow with an extensive dataset. There are three ways to represent an image in the batch run input data: an image file path, a public image URL, or a Base64 string.

- **Public image URL:** You can also reference the image URL in the entry file using this format: `{"data:<mime type>;url": "<image URL>"}`. For example, `{"data:image/png;url": "https://www.example.com/images/1.png"}`.
- **Base64 string:** A Base64 string can be referenced in the entry file using this format: `{"data:<mime type>;base64": "<base64 string>"}`. For example, `{"data:image/png;base64": "iVBORw0KGgoAAAANSUhEUgAAAGQAAABLAQMAAAC81rD0AAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAABlBMVEUAAP7////DYP5JAAAAAWJLR0QB/wIt3gAAAAlwSFlzAAALEgAACxIB0t1+/AAAAAd0SU1FB+QIGBcKN7/nP/UAAAASSURBVDjLY2AYBaNgFIwCdAAABBoAAaNglfsAAAAZdEVYdGNvbW1lbnQAQ3JlYXRlZCB3aXRoIEdJTVDnr0DLAAAAJXRFWHRkYXRlOmNyZWF0ZQAyMDIwLTA4LTI0VDIzOjEwOjU1KzAzOjAwkHdeuQAAACV0RVh0ZGF0ZTptb2RpZnkAMjAyMC0wOC0yNFQyMzoxMDo1NSswMzowMOEq5gUAAAAASUVORK5CYII="}`.

In summary, prompt flow uses a unique dictionary format to represent an image, which is `{"data:<mime type>;<representation>": "<value>"}`. Here, `<mime type>` refers to HTML standard [MIME](https://developer.mozilla.org/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types) image types, and `<representation>` refers to the supported image representations: `path`, `url`, and `base64`.
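
To illustrate this dictionary format, here's a small stdlib-only sketch (the helper name `image_entry` is hypothetical, not a prompt flow API) that builds such entries:

```python
import base64

def image_entry(representation: str, value, mime_type: str = "image/png") -> dict:
    """Build a prompt flow image dict: {"data:<mime type>;<representation>": <value>}.

    `representation` is one of "path", "url", or "base64"; raw bytes are
    base64-encoded automatically for the "base64" representation.
    """
    if representation == "base64" and isinstance(value, bytes):
        value = base64.b64encode(value).decode("ascii")
    return {f"data:{mime_type};{representation}": value}

# Example entry referencing a public image URL:
entry = image_entry("url", "https://www.example.com/images/1.png")
```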
### Create a batch run
For now, you can test the endpoint by sending a request including image inputs.

To consume the online endpoint with image input, you should represent the image by using the format `{"data:<mime type>;<representation>": "<value>"}`. In this case, `<representation>` can be either `url` or `base64`.
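
As a sketch of assembling such a request body (the input names `question` and `image` are assumptions here; substitute your flow's actual input names):

```python
import base64
import json

def build_payload(question: str, image_bytes: bytes, mime_type: str = "image/png") -> str:
    """Assemble a JSON request body containing a base64-encoded image input
    in prompt flow's {"data:<mime type>;base64": "<base64 string>"} format."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return json.dumps({
        "question": question,  # assumed flow input name
        "image": {f"data:{mime_type};base64": encoded},  # assumed flow input name
    })

# POST this string to the online endpoint with Content-Type: application/json
# and your endpoint's authorization header.
payload = build_payload("What is in this image?", b"\x89PNG\r\n")
```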

If the flow generates image output, it's returned with `base64` format, for example, `{"data:<mime type>;base64": "<base64 string>"}`.