
Commit 1ff69b8

Merge pull request #307 from FlutterFlow/feature/ai-agent
AI Agents
2 parents 576ee5e + bd3711e commit 1ff69b8

File tree

9 files changed

+186
-11
lines changed

Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
{
  "label": "AI",
  "position": 3
}

docs/ff-integrations/ai/ai-agents.md

Lines changed: 168 additions & 0 deletions
@@ -0,0 +1,168 @@
---
slug: /integrations/ai-agents
title: AI Agents
description: Learn how to add an AI Agent in your FlutterFlow app.
tags: [AI, Gemini, Integration]
sidebar_position: 1
keywords: [FlutterFlow, AI, Gemini, Integration, OpenAI, Anthropic, Agent Builder]
---

# AI Agents

AI Agents in FlutterFlow enable you to integrate AI-powered interactions using advanced LLMs (Large Language Models) directly into your app. An AI Agent is essentially a configurable chatbot or AI-powered service defined and managed within FlutterFlow.

By selecting a provider (Google, OpenAI, or Anthropic), choosing the model (e.g., GPT-4, Claude, Gemini), and specifying system instructions and preloaded messages, you can create an agent that handles user input in a context-aware way.

Here are some examples of AI Agents:

- **AI Stylist:** In an e-commerce fashion app, an AI agent analyzes photos of clothing items users upload from their wardrobes and provides styling tips based on color combinations, styles, seasons, and individual preferences.
- **Smart Recipe Assistant:** An AI agent in a cooking app that suggests recipes based on ingredients users have, dietary restrictions, or meal preferences, and offers interactive cooking guidance.
- **AI Tutor or Educator:** A conversational agent within educational apps that helps users learn complex topics, providing step-by-step explanations, answering follow-up questions, or adapting content to the learner's pace.

:::info[Prerequisite]

To use AI Agents in FlutterFlow, you need to [**connect your project to Firebase**](../firebase/connect-to-firebase-setup.md).

:::

## Create AI Agent

To create an AI agent, select the **Agents** tab from the left-side navigation menu, then click the **(+)** button. Provide a descriptive **Agent Name** (e.g., "ShoppingAssistant") and click **Create**.

:::info

You can create one AI Agent on the Standard plan and unlimited AI Agents on the Pro & Teams plans.

:::

After creating the agent, configure it using the following options:

#### Model Prompt

- **Description**: A brief explanation of what the AI agent does. Note that it is not sent to any AI models.
- **System Message**: Defines the AI's role and how it should behave when responding to users. For instance, "You are an AI fashion stylist…" tells the agent to respond like a professional stylist, focusing on outfits, colors, and suggested combinations.

#### Preloaded Messages

Preloaded messages allow you to set predefined interactions between the AI and users. This is useful for priming the agent with example responses so it understands the expected format of answers.

- **Role**: Specifies whether the message is from the **User** or the **Assistant**.
- **Message**: The actual text that either the user or assistant might send.
- **Example:**
    - **Role = User:** "What outfit suits my medium skin tone for a sunny day?"
    - **Role = Assistant:** "For your medium skin tone on a sunny day, a pastel-colored top with white chinos would look fantastic! Consider adding sunglasses and comfortable footwear."

:::tip

We recommend including at least one sample conversation with both a user message and an assistant response.

:::
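Chat-style providers generally accept the system message, preloaded examples, and live user input as one ordered message list. A minimal sketch of that assembly (the helper names are illustrative, not FlutterFlow's internal API):

```python
def build_messages(system_message, preloaded, user_input):
    """Assemble the ordered message list: system prompt first, then the
    preloaded user/assistant examples, then the live user input."""
    messages = [{"role": "system", "content": system_message}]
    for role, text in preloaded:  # role is "user" or "assistant"
        messages.append({"role": role, "content": text})
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages(
    "You are an AI fashion stylist.",
    [("user", "What outfit suits my medium skin tone for a sunny day?"),
     ("assistant", "A pastel-colored top with white chinos would look great!")],
    "What should I wear to a beach wedding?",
)
```

Because the preloaded pair precedes the live input, the model sees a worked example of the tone and format you expect before it answers.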

#### Model Settings

- **Provider**: Allows you to select the AI vendor for this agent. Currently, we support **OpenAI**, **Google**, and **Anthropic**.
    - **OpenAI & Anthropic**: If you choose OpenAI or Anthropic, FlutterFlow will create a [Cloud Function](https://firebase.google.com/docs/functions) in Firebase to relay requests to the AI API securely. Hence, your Firebase project must be on the [Blaze](https://firebase.google.com/pricing) plan (paid) to deploy the necessary cloud function. **Note that** the deployed cloud function is only accessible to authenticated users.
    - **Google**: When selecting Google as your provider, you need to enable the following in your Firebase project:
        - [**Firebase Authentication**](../authentication/firebase-auth/auth-initial-setup.md): This ensures secure interactions between users and your AI agents.
        - [**Vertex AI**](https://firebase.google.com/docs/vertex-ai): Vertex AI is Google's comprehensive AI platform used to manage and deploy machine learning models. FlutterFlow internally uses the [`firebase_vertexai`](https://pub.dev/packages/firebase_vertexai) package to integrate Google's AI models within your Firebase-connected project.
- **Model**: Choose from the list of available models for the given provider. Models differ in capabilities, supported parameters, and cost structure.
- **API Key**: Enter your provider's API key here when using **OpenAI** or **Anthropic**. FlutterFlow securely stores this key within the deployed cloud function to ensure it remains hidden from end users and network requests. If you're using **Google**, you won't see the API Key field, as authentication is managed through Vertex AI in your Firebase project.

:::tip

You can obtain your OpenAI API key from the [**OpenAI API Keys**](https://platform.openai.com/api-keys) page and your Anthropic API key from the [**Anthropic Console**](https://console.anthropic.com/settings/keys).

:::

#### Request Options

Here, you specify the type of inputs users can send to the AI.

- **Text**: Allows users to send text-based messages.
- **Image**: Enables image input, allowing the agent to analyze photos.
- **Audio**: (Google Agent only) Allows users to send audio messages or voice inputs.
- **Video**: (Google Agent only) Allows users to send short video clips for analysis.

Selecting multiple input types makes it easier for users to clearly communicate what they need. Instead of relying only on text descriptions, users can combine inputs—for example, uploading an image along with text to better illustrate their query and help the agent provide a more accurate response.

#### Response Options

Defines the type of output you want from the agent. You can select from the following options:

- **Text**: Returns plain text responses.
- **Markdown**: Allows richer formatting (headings, lists, links) if you display content as markdown. For example, an FAQ chatbot can use formatted bullet points and bold or italic text to highlight key info.
- **JSON**: Returns structured data that can be parsed programmatically. For example, a restaurant finder app might need structured data, e.g., `{"name": "Pizza Palace", "distance": "2.4 miles"}`, to display a dynamic map.
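When you choose JSON output, the agent's reply still arrives as a string, so your app logic typically parses it before use. A small defensive-parsing sketch (a hypothetical helper, not a FlutterFlow API):

```python
import json

def parse_agent_json(response_text):
    """Parse a JSON-mode agent response, returning None when the
    model produced something that isn't valid JSON."""
    try:
        return json.loads(response_text)
    except json.JSONDecodeError:
        return None

restaurant = parse_agent_json('{"name": "Pizza Palace", "distance": "2.4 miles"}')
```

Guarding the parse matters because even in JSON mode a model can occasionally emit malformed output; failing soft lets your UI show a fallback instead of crashing.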

#### Model Parameters

Here, you can fine-tune how the agent generates responses.

- **Temperature**: Controls how creative or random the AI's responses are, on a scale of 0 to 1. A lower value (e.g., 0.1) makes responses more factual and consistent. A higher value (e.g., 1.0) makes responses more creative and varied (e.g., for brainstorming ideas).
- **Max Tokens**: Limits the total number of tokens used, including both the user's request and the agent's response. Adjusting this helps manage costs and keeps interactions concise.
- **Top P**: Another technique for controlling the variety of words the AI considers. Typically kept at its default unless you need fine-grained sampling control.

For example, in a **Blog-Writing Assistant**, you might set a moderate-to-high temperature for creative phrasing and a high max tokens limit for detailed paragraphs. Conversely, a **Financial Chatbot** benefits from a lower temperature to deliver consistent, accurate, and stable responses without unnecessary creativity.
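These three settings map onto the request parameters most chat APIs expose. A rough sketch using OpenAI-style field names (the field names and ranges here are assumptions for illustration, not FlutterFlow's internal format):

```python
def model_params(temperature=0.7, max_tokens=512, top_p=1.0):
    """Validate model parameters before building a request body.

    temperature: 0 (deterministic) to 1 (creative)
    max_tokens:  budget shared by the prompt and the reply
    top_p:       nucleus-sampling cutoff, usually left at 1.0
    """
    assert 0.0 <= temperature <= 1.0, "temperature must be in [0, 1]"
    assert max_tokens > 0, "max_tokens must be positive"
    assert 0.0 < top_p <= 1.0, "top_p must be in (0, 1]"
    return {"temperature": temperature, "max_tokens": max_tokens, "top_p": top_p}

# A financial chatbot favors consistency; a blog assistant favors creativity.
finance = model_params(temperature=0.1, max_tokens=256)
blog = model_params(temperature=0.9, max_tokens=2048)
```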

Once configured, click the **Publish** button to make the agent live.

<div style={{
    position: 'relative',
    paddingBottom: 'calc(56.67989417989418% + 41px)', // Keeps the aspect ratio and additional padding
    height: 0,
    width: '100%'}}>
    <iframe
        src="https://demo.arcade.software/Vi9UpfINWa0V6uXZG0v2?embed&show_copy_link=true"
        title=""
        style={{
            position: 'absolute',
            top: 0,
            left: 0,
            width: '100%',
            height: '100%',
            colorScheme: 'light'
        }}
        frameborder="0"
        loading="lazy"
        webkitAllowFullScreen
        mozAllowFullScreen
        allowFullScreen
        allow="clipboard-write">
    </iframe>
</div>
<p></p>

:::info[For non-Google Agents]

After you successfully deploy the agent, any changes made to its configuration—such as modifying the system message, model, or temperature—will require you to redeploy the agent. For Google agents, the configuration is stored client-side, so redeployment isn't necessary.

:::

Now you can use the AI agent in your FlutterFlow app logic using the following actions.

## Send Message [Action]

The **Send Message** action allows your app to pass user input (such as text or images) to a selected AI Agent and receive a response. For example, you can add this action when a user taps a "Send" button after typing in a text field. The AI Agent then replies based on its system instructions, preloaded messages, and model settings.

You can configure the following options for this action:

- **Select Agent**: Here, you select the specific AI Agent you previously configured.
- **Conversation ID**: A unique identifier you assign to maintain context and continuity across multiple interactions within the same conversation. Using a consistent ID (e.g., `user123_AIStylist_202503181200`) allows the AI to remember past interactions and keep conversations coherent and contextual.
- **Text Input**: The user's message or input text that the AI agent will process. Typically, this input comes from a widget state (e.g., a TextField).
- **Image Input**: If your agent supports image processing, you can provide an image.
- **Audio Input**: If your agent supports audio processing, you can pass audio files.
- **Video Input**: If your agent can analyze video content, provide a video file.

:::info
- You can send media files either from a [**network URL**](../../ff-concepts/file-handling/displaying-media.md#network) or from [**local device**](../../ff-concepts/file-handling/displaying-media.md#uploaded-file) storage.
- For non-Google agents, we only support network URLs for now. To pass media files from your device, [**upload them first to cloud storage**](../../ff-concepts/file-handling/uploading-files.md#upload-or-save-media-action) and then provide the generated URL.
:::

- **Action Output Variable Name**: This field stores the AI agent's response so you can display it to users or process it further.

![ai-agent-send-message-action.avif](imgs/ai-agent-send-message-action.avif)
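The Conversation ID format shown above (`user123_AIStylist_202503181200`) can be derived from stable app state plus a timestamp. A hypothetical sketch of that scheme (FlutterFlow does not prescribe any particular format):

```python
from datetime import datetime

def make_conversation_id(user_id, agent_name, started_at):
    """Build a stable ID like 'user123_AIStylist_202503181200'.

    Reusing the same ID across Send Message calls preserves the agent's
    context; generating a new ID starts a fresh conversation."""
    return f"{user_id}_{agent_name}_{started_at:%Y%m%d%H%M}"

conv_id = make_conversation_id("user123", "AIStylist", datetime(2025, 3, 18, 12, 0))
```

Including the conversation's start time (rather than the current time) keeps the ID constant for every message in that session.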

## Clear Chat History [Action]

The **Clear Chat History** action clears the remembered context for a conversation. It takes the **Conversation ID** and stops referencing the existing thread the next time you send a message. For example, you can add this action to a refresh button inside the chat to manually reset a conversation and start a fresh one with new context.

![ai-agent-reset-action.avif](imgs/ai-agent-reset-action.avif)
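Conceptually, Clear Chat History drops the association between your Conversation ID and the stored message thread, so the next Send Message starts with empty context. A toy model of that bookkeeping (illustrative only, not FlutterFlow internals):

```python
class ConversationStore:
    """Toy model: each Conversation ID maps to a thread of prior messages."""

    def __init__(self):
        self._threads = {}

    def send_message(self, conversation_id, text):
        # Reusing an existing thread keeps prior context.
        thread = self._threads.setdefault(conversation_id, [])
        thread.append(text)
        return len(thread)  # messages of context the agent now sees

    def clear_chat_history(self, conversation_id):
        # Forget the thread; the next send starts a new one.
        self._threads.pop(conversation_id, None)

store = ConversationStore()
store.send_message("user123_AIStylist", "Hi!")
store.send_message("user123_AIStylist", "What should I wear?")
store.clear_chat_history("user123_AIStylist")
```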

docs/ff-integrations/gemini/gemini.md renamed to docs/ff-integrations/ai/gemini.md

Lines changed: 7 additions & 2 deletions
@@ -2,8 +2,8 @@
 slug: /integrations/gemini
 title: Gemini
 description: Learn how to get started with the Gemini action in your FlutterFlow app to generate text, process text-and-image inputs, and count tokens.
-tags: [Gemini, Text Generation, Token Counting, Integration]
-sidebar_position: 1
+tags: [AI, Gemini, Integration]
+sidebar_position: 2
 keywords: [FlutterFlow, Gemini, Text Generation, Token Counting, Integration]
 ---

@@ -12,6 +12,11 @@ keywords: [FlutterFlow, Gemini, Text Generation, Token Counting, Integration]

 With the Gemini action, you can generate text, process text-and-image inputs, and effortlessly count tokens.

+:::warning[Deprecation Notice]
+The Gemini action will eventually be deprecated. We recommend transitioning to the newer and more powerful [**AI Agent**](ai-agents.md) actions.
+:::
+
+
 <div class="video-container"><iframe src="https://www.loom.com/embed/1e7a383897334f6da96c58639e7abcfc?sid=b8363cff-ccfb-4ade-98fc-22a2a587e68e" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>

Binary file not shown.
Binary file not shown.

docs/ff-integrations/gemini/_category_.json

Lines changed: 0 additions & 4 deletions
This file was deleted.

docs/intro/ff-ui/builder.md

Lines changed: 4 additions & 3 deletions
@@ -13,7 +13,7 @@ keywords: [App Builder, FlutterFlow, UI, Design]
 On opening the project, you'll see the App Builder, which consists of four main sections:
 [Navigation Menu](#navigation-menu), [Toolbar](#toolbar), [Canvas](#canvas-area), and [Properties Panel](#properties-panel).

-![app-builder](imgs/builder.avif)
+![navigation-menu.avif](imgs/navigation-menu.avif)

 ## Navigation Menu

@@ -34,8 +34,9 @@ Here is a list of all the features accessible from the navigation menu:
 11. **Custom Functions**: Add custom functionalities, widgets, and actions.
 12. **Cloud Functions**: Write and deploy cloud functions for Firebase.
 13. **Tests**: Add automated tests.
-14. **Theme settings**: Customize visual appearance.
-15. **Settings and Integrations**: Access app-related settings and integrations.
+14. **Agents**: Create, configure, and manage [AI Agents](../../ff-integrations/ai/ai-agents.md) to integrate conversational AI interactions into your app.
+15. **Theme settings**: Customize visual appearance.
+16. **Settings and Integrations**: Access app-related settings and integrations.

 ## ToolBar

261 KB
Binary file not shown.

docs/resources/projects/libraries.md

Lines changed: 3 additions & 2 deletions
@@ -190,8 +190,9 @@ Once the library is imported, following resources are accessible for use:
 - [Custom Functions](../../ff-concepts/adding-customization/custom-functions.md), [Actions](../../resources/control-flow/functions/action-flow-editor.md), and [Widgets](../../resources/ui/widgets/intro-widgets.md)
 - [Assets](../../resources/projects/settings/general-settings.md#app-assets) (Note: These are not versioned)

-:::note
-Pages and Firestore Collections are still being worked on and may come in future updates.
+:::info
+- [**Pages**](../../resources/ui/pages/intro-pages.md), [**Firestore Collections**](../../ff-integrations/database/cloud-firestore/creating-collections.md), and [**Cloud Functions**](../../ff-concepts/adding-customization/cloud-functions.md) are still being worked on and may come in future updates.
+- Creation of [**AI Agents**](../../ff-integrations/ai/ai-agents.md) is not yet supported in the Library project.
 :::

 It's important to note that these resources show up where they are instantiated. For example:
