AI Agents #307

Merged: 8 commits, Mar 26, 2025
4 changes: 4 additions & 0 deletions docs/ff-integrations/ai/_category_.json
@@ -0,0 +1,4 @@
{
"label": "AI",
"position": 3
}
168 changes: 168 additions & 0 deletions docs/ff-integrations/ai/ai-agents.md
@@ -0,0 +1,168 @@
---
slug: /integrations/ai-agents
title: AI Agents
description: Learn how to add an AI Agent in your FlutterFlow app.
tags: [AI, Gemini, Integration]
sidebar_position: 1
keywords: [FlutterFlow, AI, Gemini, Integration, OpenAI, Anthropic, Agent Builder]
---

# AI Agents

AI Agents in FlutterFlow enable you to integrate AI-powered interactions, backed by advanced large language models (LLMs), directly into your app. An AI Agent is essentially a configurable chatbot or AI-powered service defined and managed within FlutterFlow.

By selecting a provider (Google, OpenAI, or Anthropic), choosing the model (e.g., GPT-4, Claude, Gemini), and specifying system instructions and preloaded messages, you can create an agent to handle user input in a context-aware way.

Here are some examples of AI Agents:

- **AI Stylist:** In an e-commerce fashion app, an AI agent analyzes photos of clothing items users upload from their wardrobes and provides styling tips based on color combinations, styles, seasons, and individual preferences.
- **Smart Recipe Assistant:** An AI agent in a cooking app that suggests recipes based on ingredients users have, dietary restrictions, or meal preferences, and offers interactive cooking guidance.
- **AI Tutor or Educator:** A conversational agent within educational apps that helps users learn complex topics, providing step-by-step explanations, answering follow-up questions, or adapting content to the learner’s pace.

:::info [Prerequisite]

To use AI Agents in FlutterFlow, you need to [**connect your project to Firebase**](../firebase/connect-to-firebase-setup.md).

:::

## Create AI Agent

To create an AI agent, select the **Agents** tab from the left-side navigation menu, then click the **(+)** button. Provide a descriptive **Agent Name** (e.g., "ShoppingAssistant") and click **Create**.

:::info

You can create one AI Agent on the Standard plan and unlimited AI Agents on the Pro & Teams plans.

:::

After creating the agent, configure it using the following options:

#### Model Prompt

- **Description**: A brief explanation of what the AI agent does. Note that it is not sent to any AI models.
- **System Message**: Defines the AI’s role and how it should behave when responding to users. For instance, “You are an AI fashion stylist…” tells the agent to respond like a professional stylist, focusing on outfits, colors, and suggested combinations.
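
For instance, a complete system message for the stylist example above might look like the following sketch (the wording is illustrative, not a FlutterFlow default):

```ts
// Illustrative system message for an AI stylist agent (wording is an assumption).
const systemMessage =
  "You are an AI fashion stylist. Recommend outfits, color combinations, and " +
  "accessories tailored to the user's preferences, skin tone, and the current " +
  "season. Keep answers short, friendly, and practical.";
```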

#### Preloaded Messages

Preloaded messages allow you to set predefined interactions between the AI and users. This is useful for priming the agent with example responses to ensure it understands the expected format of answers.

- **Role**: Specifies whether the message is from the **User** or the **Assistant**.
- **Message**: The actual text input that either the user or assistant might send.
- **Example:**
- **Role = User:** "What outfit suits my medium skin tone for a sunny day?"
- **Role = Assistant:** "For your medium skin tone on a sunny day, a pastel-colored top with white chinos would look fantastic! Consider adding sunglasses and comfortable footwear."

:::tip

Always include at least one sample conversation with both a user message and an assistant response.

:::
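
Conceptually, preloaded messages map onto the role-tagged chat format that most LLM providers use. A minimal sketch of the example above in that format (the exact payload FlutterFlow builds is an assumption):

```ts
// Preloaded messages in the common role-tagged chat format.
// The concrete shape FlutterFlow sends to the provider is an assumption.
const preloadedMessages = [
  {
    role: "user",
    content: "What outfit suits my medium skin tone for a sunny day?",
  },
  {
    role: "assistant",
    content:
      "For your medium skin tone on a sunny day, a pastel-colored top with " +
      "white chinos would look fantastic! Consider adding sunglasses and " +
      "comfortable footwear.",
  },
];
```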

#### Model Settings

- **Provider**: Allows you to select the AI vendor for this agent. Currently, we support **OpenAI**, **Google**, and **Anthropic**.
- **OpenAI & Anthropic**: If you choose OpenAI or Anthropic, FlutterFlow will create a [Cloud Function](https://firebase.google.com/docs/functions) in Firebase to securely relay requests to the AI API (a sketch of this relay pattern appears below). Hence, your Firebase project must be on the [Blaze](https://firebase.google.com/pricing) plan (paid) to deploy the necessary cloud function. Note that the deployed cloud function will only be accessible to authenticated users.
- **Google**: When selecting Google as your provider, you need to enable the following in your Firebase project:
- [**Firebase Authentication**](../authentication/firebase-auth/auth-initial-setup.md): This ensures secure interactions between users and your AI agents.
- [**Vertex AI**](https://firebase.google.com/docs/vertex-ai): Vertex AI is Google's comprehensive AI platform used to manage and deploy machine learning models. FlutterFlow internally uses the [`firebase_vertexai`](https://pub.dev/packages/firebase_vertexai) package to integrate Google's AI models within your Firebase-connected project.
- **Model**: Choose from the list of available models for the given provider. Models differ in capabilities, supported parameters, and cost structure.
- **API Key**: Enter your provider’s API key here when using **OpenAI** or **Anthropic**. FlutterFlow securely stores this key within the deployed cloud function to ensure it remains hidden from end users and network requests. If you're using **Google**, you won't see the API Key field, as authentication is managed through Vertex AI in your Firebase project.

:::tip

You can obtain your OpenAI API key from [**OpenAI API Keys**](https://platform.openai.com/api-keys) page and your Anthropic API key from [**Anthropic Console**](https://console.anthropic.com/settings/keys).

:::
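
To make the relay pattern above concrete, here is a minimal sketch of an authenticated callable Cloud Function that forwards a chat request to OpenAI. This is an illustration, not the code FlutterFlow generates; the function name, payload shape, and model are assumptions:

```ts
// Hypothetical sketch of the relay pattern; not FlutterFlow's generated code.
import { onCall, HttpsError } from "firebase-functions/v2/https";

export const relayToOpenAI = onCall(async (request) => {
  // Reject unauthenticated callers, matching the behavior described above.
  if (!request.auth) {
    throw new HttpsError("unauthenticated", "Sign in to chat with the agent.");
  }

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      // The API key stays server-side; it is never exposed to the client.
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o", // assumed model name, for illustration only
      messages: request.data.messages, // assumed payload shape from the client
    }),
  });

  const json = await response.json();
  return { text: json.choices[0].message.content };
});
```

Because the key is read from server-side configuration, rotating it never requires shipping a new app build.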

#### Request Options

Here, you specify the type of inputs users can send to the AI.

- **Text**: Allows users to send text-based messages.
- **Image**: Enables image input, allowing the agent to analyze photos.
- **Audio**: (Google Agent only) Allows users to send audio messages or voice inputs.
- **Video**: (Google Agent only) Allows users to send short video clips to analyze.

Selecting multiple input types makes it easier for users to clearly communicate what they need. Instead of relying only on text descriptions, users can combine inputs—for example, uploading an image along with text to better illustrate their queries and help the agent provide more accurate responses.
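
As a sketch of what a combined input can look like, here is a text-plus-image user message in the content-parts style several providers use (the exact payload FlutterFlow sends is an assumption):

```ts
// Hypothetical multimodal user message combining text and an image URL.
const multimodalMessage = {
  role: "user",
  content: [
    { type: "text", text: "Does this jacket work for a fall wedding?" },
    { type: "image_url", image_url: { url: "https://example.com/jacket.jpg" } },
  ],
};
```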

#### Response Options

Defines the type of output you want from the agent. You can select from the following options:

- **Text**: Returns plain text responses.
- **Markdown**: Allows richer formatting (headings, lists, links) if you display content as markdown. For example, an FAQ chatbot might use formatted bullet points and bold or italic text to highlight key information.
- **JSON**: Returns structured data, which can be parsed programmatically. For example, a restaurant finder app might need structured data, e.g., `{"name": "Pizza Palace", "distance": "2.4 miles"}`, to display a dynamic map.
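
With the JSON option, the agent's reply can be decoded directly into app data. A minimal sketch, assuming the restaurant-finder shape from the example above:

```ts
// Parsing a structured JSON response from the agent (shape is illustrative).
interface RestaurantResult {
  name: string;
  distance: string;
}

const rawResponse = '{"name": "Pizza Palace", "distance": "2.4 miles"}';
const result: RestaurantResult = JSON.parse(rawResponse);
console.log(`${result.name} is ${result.distance} away.`);
```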

#### Model Parameters

Here, you can fine-tune how the agent generates responses.

- **Temperature**: Controls how creative or random the AI’s responses can be on a scale of 0 to 1. A lower value (e.g., 0.1) makes responses more factual and consistent. A higher value (e.g., 1.0) makes responses more creative and varied (e.g., brainstorming ideas).
- **Max Tokens**: Limits the total number of tokens used, including both the user's request and the agent's response. Adjusting this helps manage costs and ensures concise interactions.
- **Top P**: Another sampling control that restricts the AI to the most probable words (nucleus sampling). Typically kept at its default unless you need fine-grained control over sampling.

For example, in a **Blog-Writing Assistant**, you might set a moderate to high temperature for creative phrasing and a high max tokens limit for detailed paragraphs. Conversely, a **Financial Chatbot** would benefit from a lower temperature to deliver consistent, accurate, and stable responses without unnecessary creativity.
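
Expressed as parameter presets, those two scenarios might look like this (values are illustrative starting points, not FlutterFlow recommendations):

```ts
// Illustrative presets for the two scenarios above; tune for your own use case.
const blogWritingAssistant = { temperature: 0.9, maxTokens: 2048, topP: 1.0 };
const financialChatbot = { temperature: 0.1, maxTokens: 512, topP: 1.0 };
```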

Once the agent is configured, click the **Publish** button to make it live.


<div style={{
position: 'relative',
paddingBottom: 'calc(56.67989417989418% + 41px)', // Keeps the aspect ratio and additional padding
height: 0,
width: '100%'}}>
<iframe
src="https://demo.arcade.software/Vi9UpfINWa0V6uXZG0v2?embed&show_copy_link=true"
title=""
style={{
position: 'absolute',
top: 0,
left: 0,
width: '100%',
height: '100%',
colorScheme: 'light'
}}
frameborder="0"
loading="lazy"
webkitAllowFullScreen
mozAllowFullScreen
allowFullScreen
allow="clipboard-write">
</iframe>
</div>
<p></p>

:::info [For non-Google Agents]

After you successfully deploy the agent, any changes made to its configuration (such as modifying the system message, model, or temperature) will require you to redeploy the agent. For Google agents, the configuration is stored client-side, so redeployment isn't necessary.
:::

You can now use the AI agent in your FlutterFlow app logic via the following actions.

## Send Message [Action]

The **Send Message** action allows your app to pass user input (such as text or images) to a selected AI Agent and receive a response. For example, you can add this action when a user taps a “Send” button after typing in a text field. The AI Agent can then reply based on its system instructions, preloaded messages, and model settings.

You can configure the following options for this action:

- **Select Agent**: Here, you select the specific AI Agent you previously configured.
- **Conversation ID**: A unique identifier you assign to maintain context and continuity across multiple interactions within the same conversation. Using a consistent ID (e.g., `user123_AIStylist_202503181200`) allows the AI to remember past interactions and keep conversations coherent and contextual; a sketch of one way to build such an ID follows this list.
- **Text Input**: This is where you specify the user's message or input text that the AI agent will process. Typically, this input comes from a widget state (e.g., TextField).
- **Image Input**: If your agent supports image processing, you can provide an image.
- **Audio Input**: If your agent supports audio processing, you can pass audio files.
- **Video Input**: If your agent can analyze video content, provide a video file.

:::info
- You can send media files either from a [**network URL**](../../ff-concepts/file-handling/displaying-media.md#network) or from [**local device storage**](../../ff-concepts/file-handling/displaying-media.md#uploaded-file).
- For non-Google agents, we only support network URLs for now. To pass media files from your device, [**upload them first to cloud storage**](../../ff-concepts/file-handling/uploading-files.md#upload-or-save-media-action) and then provide the generated URL.
:::

- **Action Output Variable Name**: Stores the AI agent's response so you can display it to users or process it further.
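
As referenced in the Conversation ID option above, here is a sketch of one way to build a stable ID using a hypothetical helper (FlutterFlow does not prescribe a format):

```ts
// Hypothetical helper for building a stable conversation ID.
// Reuse the same ID across sends to keep context; use a new ID to start a fresh thread.
function conversationId(userId: string, agent: string, startedAt: string): string {
  return `${userId}_${agent}_${startedAt}`;
}

const id = conversationId("user123", "AIStylist", "202503181200");
// => "user123_AIStylist_202503181200"
```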

![ai-agent-send-message-action.avif](imgs/ai-agent-send-message-action.avif)

## Clear Chat History [Action]

The **Clear Chat History** action clears the remembered context for a conversation. It takes the **Conversation ID** and stops referencing the existing thread the next time you send a message. For example, you can add this action to a refresh button inside the chat to manually reset a conversation and start a fresh one with new context.

![ai-agent-reset-action.avif](imgs/ai-agent-reset-action.avif)
@@ -2,8 +2,8 @@
slug: /integrations/gemini
title: Gemini
description: Learn how to get started with the Gemini action in your FlutterFlow app to generate text, process text-and-image inputs, and count tokens.
tags: [Gemini, Text Generation, Token Counting, Integration]
sidebar_position: 1
tags: [AI, Gemini, Integration]
sidebar_position: 2
keywords: [FlutterFlow, Gemini, Text Generation, Token Counting, Integration]
---

@@ -12,6 +12,11 @@ keywords: [FlutterFlow, Gemini, Text Generation, Token Counting, Integration]

With the Gemini action, you can generate text, process text-and-image inputs, and effortlessly count tokens.

:::warning[Deprecation Notice]
The Gemini action will eventually be deprecated. We recommend transitioning to the newer and more powerful [**AI Agent**](ai-agents.md) actions.
:::


<div class="video-container"><iframe src="https://www.loom.com/embed/1e7a383897334f6da96c58639e7abcfc?sid=b8363cff-ccfb-4ade-98fc-22a2a587e68e" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>

Binary file not shown.
Binary file not shown.
4 changes: 0 additions & 4 deletions docs/ff-integrations/gemini/_category_.json

This file was deleted.

7 changes: 4 additions & 3 deletions docs/intro/ff-ui/builder.md
@@ -13,7 +13,7 @@ keywords: [App Builder, FlutterFlow, UI, Design]
On opening the project, you'll see the App Builder, which consists of four main sections:
[Navigation Menu](#navigation-menu), [Toolbar](#toolbar), [Canvas](#canvas-area), and [Properties Panel](#properties-panel).

![app-builder](imgs/builder.avif)
![navigation-menu.avif](imgs/navigation-menu.avif)

## Navigation Menu

@@ -34,8 +34,9 @@ Here is a list of all the features accessible from the navigation menu:
11. **Custom Functions**: Add custom functionalities, widgets, and actions.
12. **Cloud Functions**: Write and deploy cloud functions for Firebase.
13. **Tests**: Add automated tests.
14. **Theme settings**: Customize visual appearance.
15. **Settings and Integrations**: Access app-related settings and integrations.
14. **Agents**: Create, configure, and manage [AI Agents](../../ff-integrations/ai/ai-agents.md) to integrate conversational AI interactions into your app.
15. **Theme settings**: Customize visual appearance.
16. **Settings and Integrations**: Access app-related settings and integrations.

## ToolBar

Binary file added docs/intro/ff-ui/imgs/navigation-menu.avif
Binary file not shown.
5 changes: 3 additions & 2 deletions docs/resources/projects/libraries.md
@@ -190,8 +190,9 @@ Once the library is imported, following resources are accessible for use:
- [Custom Functions](../../ff-concepts/adding-customization/custom-functions.md), [Actions](../../resources/control-flow/functions/action-flow-editor.md), and [Widgets](../../resources/ui/widgets/intro-widgets.md)
- [Assets](../../resources/projects/settings/general-settings.md#app-assets) (Note: These are not versioned)

:::note
Pages and Firestore Collections are still being worked on and may come in future updates.
:::info
- [**Pages**](../../resources/ui/pages/intro-pages.md), [**Firestore Collections**](../../ff-integrations/database/cloud-firestore/creating-collections.md), and [**Cloud Functions**](../../ff-concepts/adding-customization/cloud-functions.md) are still being worked on and may come in future updates.
- Creation of [**AI Agents**](../../ff-integrations/ai/ai-agents.md) is not yet supported in the Library project.
:::

It's important to note that these resources show up where they are instantiated. For example:
Expand Down