
Commit 8533290

Merge pull request #267080 from likebupt/update-pf-tool-20240223
update pf tool articles
2 parents ed0c661 + ede46a8 commit 8533290

5 files changed: +21 −11 lines changed

articles/ai-studio/how-to/prompt-flow-tools/azure-open-ai-gpt-4v-tool.md

Lines changed: 4 additions & 4 deletions
@@ -74,7 +74,7 @@ The following are available input parameters:
| ---- | ---- | ----------- | -------- |
| connection | AzureOpenAI | The Azure OpenAI connection to be used in the tool. | Yes |
| deployment\_name | string | The language model to use. | Yes |
- | prompt | string | Text prompt that the language model uses to generate its response. | Yes |
+ | prompt | string | Text prompt that the language model uses to generate its response. The Jinja template for composing prompts in this tool follows a similar structure to the chat API in the LLM tool. To represent an image input within your prompt, you can use the syntax `![image]({{INPUT NAME}})`. Image input can be passed in the `user`, `system` and `assistant` messages. | Yes |
| max\_tokens | integer | Maximum number of tokens to generate in the response. Default is 512. | No |
| temperature | float | Randomness of the generated text. Default is 1. | No |
| stop | list | Stopping sequence for the generated text. Default is null. | No |
@@ -90,7 +90,7 @@ The following are available output parameters:
|-------------|------------------------------------------|
| string | The text of one response of conversation |

- ## Next steps
- - [Learn more about how to create a flow](../flow-develop.md)
+ ## Next step
+ - Learn more about [how to process images in prompt flow](../flow-process-image.md).
+ - [Learn more about how to create a flow](../flow-develop.md).
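To make the new prompt description concrete, here is a minimal sketch of a chat-style Jinja prompt using the `![image]({{INPUT NAME}})` syntax described above; the input name `image_input` is only an illustrative placeholder, not something defined in these articles:

```jinja
# system:
You are an assistant that describes the content of images.

# user:
Describe what is shown in this picture:
![image]({{image_input}})
```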

articles/ai-studio/how-to/prompt-flow-tools/python-tool.md

Lines changed: 5 additions & 3 deletions
@@ -111,13 +111,15 @@ If you're developing a python tool that requires calling external services with
Create a custom connection that stores all your LLM API KEY or other required credentials.

- 1. Go to Prompt flow in your workspace, then go to **connections** tab.
- 1. Select **Create** and select **Custom**.
- 1. In the right panel, you can define your connection name, and you can add multiple *Key-value pairs* to store your credentials and keys by selecting **Add key-value pairs**.
+ 1. Go to **AI project settings**, then select **New Connection**.
+ 1. Select **Custom** service. You can define your connection name, and you can add multiple *Key-value pairs* to store your credentials and keys by selecting **Add key-value pairs**.

> [!NOTE]
> Make sure at least one key-value pair is set as secret, otherwise the connection will not be created successfully. You can set one Key-Value pair as secret by **is secret** checked, which will be encrypted and stored in your key value.

+ :::image type="content" source="../../media/prompt-flow/create-connection.png" alt-text="Screenshot that shows create connection in AI Studio." lightbox = "../../media/prompt-flow/create-connection.png":::

1. Add the following custom keys to the connection:
    - `azureml.flow.connection_type`: `Custom`
    - `azureml.flow.module`: `promptflow.connections`
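As a complement to the connection steps in this diff, here is a minimal, hypothetical sketch of how a Python tool might read values from such a custom connection; the key names `api_key` and `api_base` are placeholder examples of key-value pairs you would have added yourself:

```python
from promptflow import tool
from promptflow.connections import CustomConnection


@tool
def call_external_service(message: str, connection: CustomConnection) -> str:
    # "api_key" and "api_base" are hypothetical key-value pairs stored on the
    # custom connection; substitute the names you actually defined.
    api_key = connection.api_key    # values marked "is secret" are decrypted at runtime
    api_base = connection.api_base
    # ... call your external service with api_key / api_base here ...
    return f"Would call {api_base} with message: {message}"
```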

articles/machine-learning/prompt-flow/tools-reference/azure-open-ai-gpt-4v-tool.md

Lines changed: 6 additions & 2 deletions
@@ -34,7 +34,7 @@ Azure OpenAI GPT-4 Turbo with Vision tool enables you to leverage your AzureOpen
## Connection

- Setup connections to provisioned resources in prompt flow.
+ Set up connections to provisioned resources in prompt flow.

| Type | Name | API KEY | API Type | API Version |
|-------------|----------|----------|----------|-------------|
@@ -46,7 +46,7 @@ Setup connections to provisioned resources in prompt flow.
|------------------------|-------------|------------------------------------------------------------------------------------------------|----------|
| connection | AzureOpenAI | the AzureOpenAI connection to be used in the tool | Yes |
| deployment\_name | string | the language model to use | Yes |
- | prompt | string | The text prompt that the language model will use to generate its response. | Yes |
+ | prompt | string | Text prompt that the language model uses to generate its response. The Jinja template for composing prompts in this tool follows a similar structure to the chat API in the LLM tool. To represent an image input within your prompt, you can use the syntax `![image]({{INPUT NAME}})`. Image input can be passed in the `user`, `system` and `assistant` messages. | Yes |
| max\_tokens | integer | the maximum number of tokens to generate in the response. Default is 512. | No |
| temperature | float | the randomness of the generated text. Default is 1. | No |
| stop | list | the stopping sequence for the generated text. Default is null. | No |
@@ -59,3 +59,7 @@ Setup connections to provisioned resources in prompt flow.
| Return Type | Description |
|-------------|------------------------------------------|
| string | The text of one response of conversation |

+ ## Next step
+ Learn more about [how to process images in prompt flow](../how-to-process-image.md).

articles/machine-learning/prompt-flow/tools-reference/openai-gpt-4v-tool.md

Lines changed: 6 additions & 2 deletions
@@ -44,7 +44,7 @@ Set up connections to provisioned resources in prompt flow.
|------------------------|-------------|------------------------------------------------------------------------------------------------|----------|
| connection | OpenAI | The OpenAI connection to be used in the tool. | Yes |
| model | string | The language model to use, currently only support gpt-4-vision-preview. | Yes |
- | prompt | string | The text prompt that the language model uses to generate its response. | Yes |
+ | prompt | string | Text prompt that the language model uses to generate its response. The Jinja template for composing prompts in this tool follows a similar structure to the chat API in the LLM tool. To represent an image input within your prompt, you can use the syntax `![image]({{INPUT NAME}})`. Image input can be passed in the `user`, `system` and `assistant` messages. | Yes |
| max\_tokens | integer | The maximum number of tokens to generate in the response. Default is a low value decided by [OpenAI API](https://platform.openai.com/docs/guides/vision). | No |
| temperature | float | The randomness of the generated text. Default is 1. | No |
| stop | list | The stopping sequence for the generated text. Default is null. | No |
@@ -56,4 +56,8 @@ Set up connections to provisioned resources in prompt flow.
| Return Type | Description |
|-------------|------------------------------------------|
- | string | The text of one response of conversation |
+ | string | The text of one response of conversation |

+ ## Next step
+ Learn more about [how to process images in prompt flow](../how-to-process-image.md).
