docs/ff-integrations/AI/ai-agents.md (17 additions, 8 deletions)
@@ -79,24 +79,26 @@ You can obtain your OpenAI API key from [**OpenAI API Keys**](https://platform.o
Here, you specify the type of inputs users can send to the AI.

- **Text**: Allows users to send text-based messages.
-- **Image**: Enables image input, allowing the AI to analyze photos.
+- **Image**: Enables image input, allowing the agent to analyze photos.
+- **Audio**: (Google Agent only) Allows users to send audio messages or voice inputs.
+- **Video**: (Google Agent only) Allows users to send short video clips for analysis.

-Selecting both means users can send both text and/or images. For example, with an AI Stylist agent, enabling both Text and Image allows users to either describe their outfits in words or upload images of clothing items for analysis.
+Selecting multiple input types makes it easier for users to clearly communicate what they need. Instead of relying only on text descriptions, users can combine inputs—for example, uploading an image along with text to better illustrate their queries and help the agent provide more accurate responses.

#### Response Options
Defines the type of output you want from the agent. You can select from the following options:

- **Text**: Returns plain text responses.
- **Markdown**: Allows richer formatting (headings, lists, links) if you display content as markdown. For example, an FAQ chatbot might use formatted bullet points and bold or italic text to highlight key info.
-- **Data Type (JSON)**: Returns structured data, which can be parsed programmatically. For example, a restaurant finder app might need structured data, e.g., `{ name: 'Pizza Palace', distance: '2.4 miles' }` to display a dynamic map.
+- **JSON**: Returns structured data, which can be parsed programmatically. For example, a restaurant finder app might need structured data, e.g., `{ name: 'Pizza Palace', distance: '2.4 miles' }` to display a dynamic map.
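Since the JSON option returns machine-readable output, the response can be parsed directly in downstream logic. Below is a minimal sketch of what that looks like outside FlutterFlow, assuming a response shaped like the restaurant example above; the `raw_response` string is illustrative, not actual agent output.

```python
import json

# Illustrative agent output when the Response Option is set to JSON.
raw_response = '{"name": "Pizza Palace", "distance": "2.4 miles"}'

# Structured output can be parsed and fed into app logic,
# e.g., plotting the restaurant on a dynamic map.
restaurant = json.loads(raw_response)
print(restaurant["name"])      # Pizza Palace
print(restaurant["distance"])  # 2.4 miles
```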
#### Model Parameters

Here, you can fine-tune how the agent generates responses.

- **Temperature**: Controls how creative or random the AI’s responses can be on a scale of 0 to 1. A lower value (e.g., 0.1) makes responses more factual and consistent. A higher value (e.g., 1.0) makes responses more creative and varied (e.g., brainstorming ideas).
-- **Max Tokens**: Sets the response length limit—helpful for cost control or for ensuring short, direct replies.
+- **Max Tokens**: Limits the total number of tokens used, including both the user's request and the agent's response. Adjusting this helps manage costs and ensures concise interactions.
- **Top P**: Another technique for controlling the variety of words the AI considers. Typically kept at default unless you want fine-tuned sampling control.

For example, in a **Blog-Writing Assistant**, you might set a moderate to high temperature for creative phrasing and a high max tokens limit for detailed paragraphs. Conversely, a **Financial Chatbot** would benefit from a lower temperature to deliver consistent, accurate, and stable responses without unnecessary creativity.
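FlutterFlow exposes these parameters in the agent configuration UI, but it can help to see where they end up. The sketch below shows the same knobs on a direct OpenAI Chat Completions call, assuming the `openai` Python SDK; the model name and values are illustrative, not FlutterFlow defaults.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative settings for a blog-writing assistant: creative but bounded.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a helpful blog-writing assistant."},
        {"role": "user", "content": "Draft an intro paragraph about home composting."},
    ],
    temperature=0.8,  # higher values give more creative, varied phrasing
    max_tokens=600,   # token budget for the reply; helps control cost and length
    top_p=1.0,        # usually left at its default unless tuning sampling
)
print(response.choices[0].message.content)
```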
@@ -130,10 +132,9 @@ Once configured, click the **Publish** button to make it live.
</div>
<p></p>

-:::info
-After you successfully deploy the agent, any changes made to its configuration—such as modifying the system message, model, or temperature will require you to redeploy the Agent.
+:::info [For non-Google Agents]

+After you successfully deploy the agent, any changes made to its configuration (such as modifying the system message, model, or temperature) will require you to redeploy the Agent. For Google agents, the configuration is stored client-side, so redeployment isn't necessary.
:::

Now you can use the AI agent in your FlutterFlow app logic using the following actions.
@@ -147,7 +148,15 @@ You can configure the following options for this action:
- **Select Agent**: Here, you select the specific AI Agent you previously configured.
- **Conversation ID**: The Conversation ID is a unique identifier you assign to maintain context and continuity across multiple interactions within the same conversation. Using a consistent ID (e.g., `user123_AIStylist_202503181200`) allows the AI to remember past interactions and keep conversations coherent and contextual (see the sketch after this list).
- **Text Input**: This is where you specify the user's message or input text that the AI agent will process. Typically, this input comes from a widget state (e.g., TextField).
-- **Image Input**: If your agent supports image processing, you can provide an image either from [local device](../../ff-concepts/file-handling/displaying-media.md#uploaded-file) storage or a [network URL](../../ff-concepts/file-handling/displaying-media.md#network).
+- **Image Input**: If your agent supports image processing, you can provide an image.
+- **Audio Input**: If your agent supports audio processing, you can pass audio files.
+- **Video Input**: If your agent can analyze video content, provide a video file.

+:::info
+- You can send media files either from a [**network URL**](../../ff-concepts/file-handling/displaying-media.md#network) or from [**local device**](../../ff-concepts/file-handling/displaying-media.md#uploaded-file) storage.
+- For non-Google agents, we only support network URLs for now. To pass a media file from your device, [**upload it first to cloud storage**](../../ff-concepts/file-handling/uploading-files.md#upload-or-save-media-action) and then provide its generated URL.
+:::

- **Action Output Variable Name**: This field stores the AI agent's response so you can display it to users or process it further.
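As referenced in the **Conversation ID** option above, the ID is just a string you construct and reuse. A minimal sketch, assuming you combine an authenticated user ID, the agent name, and a timestamp; the helper below is hypothetical, not a FlutterFlow API.

```python
from datetime import datetime, timezone

def build_conversation_id(user_id: str, agent_name: str) -> str:
    """Combine user, agent, and a start timestamp into a readable, unique ID."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M")
    return f"{user_id}_{agent_name}_{stamp}"

# e.g. "user123_AIStylist_202503181200"
conversation_id = build_conversation_id("user123", "AIStylist")
print(conversation_id)
```

Generate the ID once when the conversation starts, store it in app state, and pass the same value with every subsequent message so the agent keeps the shared context.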
docs/resources/projects/libraries.md (2 additions, 2 deletions)
@@ -190,8 +190,8 @@ Once the library is imported, following resources are accessible for use:
- [Custom Functions](../../ff-concepts/adding-customization/custom-functions.md), [Actions](../../resources/control-flow/functions/action-flow-editor.md), and [Widgets](../../resources/ui/widgets/intro-widgets.md)
- [Assets](../../resources/projects/settings/general-settings.md#app-assets) (Note: These are not versioned)

-:::note
-- Pages and Firestore Collections are still being worked on and may come in future updates.
+:::info
+- [**Pages**](../../resources/ui/pages/intro-pages.md), [**Firestore Collections**](../../ff-integrations/database/cloud-firestore/creating-collections.md), and [**Cloud Functions**](../../ff-concepts/adding-customization/cloud-functions.md) are still being worked on and may come in future updates.
- Creation of [**AI Agents**](../../ff-integrations/AI/ai-agents.md) is not yet supported in the Library project.