
AI Agents #307

Merged · 8 commits · Mar 26, 2025
4 changes: 4 additions & 0 deletions docs/ff-integrations/AI/_category_.json
@@ -0,0 +1,4 @@
{
"label": "AI",
"position": 3
}
159 changes: 159 additions & 0 deletions docs/ff-integrations/AI/ai-agents.md
@@ -0,0 +1,159 @@
---
slug: /integrations/ai-agents
title: AI Agents
description: Learn how to add an AI Agent to your FlutterFlow app.
tags: [AI, Gemini, Integration]
sidebar_position: 1
keywords: [FlutterFlow, AI, Gemini, Integration, OpenAI, Anthropic, Agent Builder]
---

# AI Agents

AI Agents in FlutterFlow enable you to integrate AI-powered interactions using advanced LLMs (Large Language Models) directly into your app. An AI Agent is essentially a configurable chatbot or AI-powered service defined and managed within FlutterFlow.

By selecting a provider (Google, OpenAI, or Anthropic), choosing the model (e.g., GPT-4, Claude, Gemini), and specifying system instructions and preloaded messages, you can create an agent to handle user input in a context-aware way.

Here are some examples of AI Agents:

- **AI Stylist:** In an e-commerce fashion app, an AI agent analyzes photos of clothing items users upload from their wardrobes and provides styling tips based on color combinations, styles, seasons, and individual preferences.
- **Smart Recipe Assistant:** An AI agent in a cooking app that suggests recipes based on ingredients users have, dietary restrictions, or meal preferences, and offers interactive cooking guidance.
- **AI Tutor or Educator:** A conversational agent within educational apps that helps users learn complex topics, providing step-by-step explanations, answering follow-up questions, or adapting content to the learner’s pace.

:::info [Prerequisite]

To use AI Agents in FlutterFlow, you need to [**connect your project to Firebase**](../firebase/connect-to-firebase-setup.md).

:::

## Create AI Agent

To create an AI agent, select the **Agents** tab from the left-side navigation menu, then click the **(+)** button. Provide a descriptive **Agent Name** (e.g., "ShoppingAssistant") and click **Create**.

:::info

You can create one AI Agent on the Standard plan and unlimited AI Agents on the Pro & Teams plans.

:::

After creating the agent, configure it using the following options:

#### Model Prompt

- **Description**: A brief explanation of what the AI agent does. Note that it is not sent to any AI models.
- **System Message**: Defines the AI’s role and how it should behave when responding to users. For instance, “You are an AI fashion stylist…” tells the agent to respond like a professional stylist, focusing on outfits, colors, and suggested combinations.

#### Preloaded Messages

Preloaded messages let you define example interactions between the AI and users. This is useful for priming the agent with sample responses so it understands the expected format of its answers.

- **Role**: Specifies whether the message is from the **User** or the **Assistant**.
- **Message**: The actual text input that either the user or assistant might send.
- **Example:**
- **Role = User:** "What outfit suits my medium skin tone for a sunny day?"
- **Role = Assistant:** "For your medium skin tone on a sunny day, a pastel-colored top with white chinos would look fantastic! Consider adding sunglasses and comfortable footwear."

:::tip

We recommend including at least one sample conversation with both a user message and an assistant response.

:::
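
For context, chat-style LLM APIs receive the system message, the preloaded messages, and the user's live input as a single ordered message list. Below is a minimal sketch of what that combined payload might look like; the exact shape FlutterFlow sends is an internal detail, so treat every name here as illustrative.

```ts
// Illustrative only: how the System Message, Preloaded Messages, and the
// user's live input are typically combined into one chat request.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

const messages: ChatMessage[] = [
  // System Message: defines the agent's role and behavior.
  { role: "system", content: "You are an AI fashion stylist..." },
  // Preloaded Messages: an example exchange that anchors the answer format.
  { role: "user", content: "What outfit suits my medium skin tone for a sunny day?" },
  {
    role: "assistant",
    content:
      "For your medium skin tone on a sunny day, a pastel-colored top " +
      "with white chinos would look fantastic!",
  },
  // The message the user just typed in the app.
  { role: "user", content: "And what should I wear to an evening dinner?" },
];
```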

#### Model Settings

- **Provider**: Allows you to select the AI vendor for this agent. Currently, we support **OpenAI**, **Google**, and **Anthropic**.
- **OpenAI & Anthropic**: If you choose OpenAI or Anthropic, FlutterFlow creates a [Cloud Function](https://firebase.google.com/docs/functions) in Firebase that securely relays requests to the AI API. The deployed function only accepts calls from authenticated users, the same pattern used for other cloud functions FlutterFlow deploys for you (such as the Stripe payment function). Your Firebase project must be on the [Blaze](https://firebase.google.com/pricing) plan (paid) to deploy the necessary cloud function.

> **Contributor:** Can you add that for OpenAI and Anthropic, the deployed cloud function will only be accessible if the user calling it is authenticated? We do this in other cloud functions we deploy for the user too (like the Stripe payment one). Thanks!
>
> **Collaborator (Author):** Yes, added!

- **Google**: When selecting Google as your provider, you need to enable the following in your Firebase project.
- [**Firebase Authentication**](../authentication/firebase-auth/auth-initial-setup.md): This ensures secure interactions between users and your AI agents.
- [**Vertex AI**](https://firebase.google.com/docs/vertex-ai): Vertex AI is Google's comprehensive AI platform used to manage and deploy machine learning models. FlutterFlow internally uses the [`firebase_vertexai`](https://pub.dev/packages/firebase_vertexai) package to integrate Google's AI models within your Firebase-connected project.
- **Model**: Choose from the list of available models for the given provider. Models differ in capabilities, supported parameters, and cost structure.
- **API Key:** Enter your provider’s API key here when using **OpenAI** or **Anthropic**. FlutterFlow securely stores this key within the deployed cloud function to ensure it remains hidden from end-users and network requests. If you're using **Google**, you won't see the API Key field, as authentication is managed through Vertex AI in your Firebase project.

:::tip

You can obtain your OpenAI API key from the [**OpenAI API Keys**](https://platform.openai.com/api-keys) page and your Anthropic API key from the [**Anthropic Console**](https://console.anthropic.com/settings/keys).

:::
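
For a sense of what the deployed relay does, the standard callable-function pattern is sketched below: reject unauthenticated callers, then forward the request using the server-side API key. This is an approximation, not FlutterFlow's generated code; the function and variable names are hypothetical, and a Node 18+ runtime (with global `fetch`) is assumed.

```ts
import * as functions from "firebase-functions";

// Hypothetical sketch of an AI relay function. FlutterFlow's generated
// code differs, but the security pattern is the same: only authenticated
// users may call it, and the API key never leaves the server.
export const relayToOpenAI = functions.https.onCall(async (data, context) => {
  // Reject unauthenticated callers, as with other functions FlutterFlow
  // deploys on your behalf (e.g., the Stripe payment function).
  if (!context.auth) {
    throw new functions.https.HttpsError(
      "unauthenticated",
      "You must be signed in to use this agent."
    );
  }

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      // The key is stored with the function, never shipped to the client.
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "gpt-4", messages: data.messages }),
  });
  return response.json();
});
```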

#### Request Options

Here, you specify the type of inputs users can send to the AI.

- **Text**: Allows users to send text-based messages.
- **Image**: Enables image input, allowing the AI to analyze photos.
- **Audio & Video**: Accepted in addition to text and images when **Google** is the selected provider.

> **Contributor:** Note that for the Google agent, we also accept Audio and Video.
>
> **Collaborator (Author):** Added.


Selecting multiple input types means users can send any combination of them. For example, with an AI Stylist agent, enabling both Text and Image lets users either describe their outfits in words or upload images of clothing items for analysis.

#### Response Options

Defines the type of output you want from the agent. You can select from the following options:

- **Text**: Returns plain text responses.
- **Markdown**: Allows richer formatting (headings, lists, links) if you display content as markdown. For example, an FAQ chatbot might use bullet points and bold or italic text to highlight key info.
- **JSON**: Returns a structured JSON object that can be parsed programmatically (see the parsing sketch below). For example, a restaurant finder app might need structured data such as `{"name": "Pizza Palace", "distance": "2.4 miles"}` to display a dynamic map.

> **Contributor:** Can we just refer to this as JSON for now? Technically it's just a JSON object. Later, we will add a 4th option (Data Type) and the JSON will be automatically parsed into the data type. But for now, it's just JSON.
>
> **Collaborator (Author):** Updated!
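
Returning to the **JSON** response type above: the agent's reply arrives as a JSON string that you parse before use. A minimal sketch, assuming the restaurant-finder reply shown earlier:

```ts
// Hedged sketch: parsing a structured agent reply before displaying it.
interface Restaurant {
  name: string;
  distance: string;
}

const reply = '{"name": "Pizza Palace", "distance": "2.4 miles"}';
const restaurant = JSON.parse(reply) as Restaurant;
console.log(`${restaurant.name} is ${restaurant.distance} away.`);
```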


#### Model Parameters

Here, you can fine-tune how the agent generates responses.

- **Temperature**: Controls how creative or random the AI’s responses can be on a scale of 0 to 1. A lower value (e.g., 0.1) makes responses more factual and consistent. A higher value (e.g., 1.0) makes responses more creative and varied (e.g., brainstorming ideas).
- **Max Tokens**: Sets the maximum combined length of the request and response in tokens—helpful for cost control or for ensuring short, direct replies.

> **Contributor:** I learned recently that the max tokens includes both the user's request AND the response. So maybe specify that here. I updated the tooltip in the product to: "This is the maximum combined length of the request and response in tokens."
>
> **Collaborator (Author):** Updated!

- **Top P**: Another way to control the variety of words the AI considers. Typically kept at the default unless you want fine-grained sampling control.

For example, in a **Blog-Writing Assistant**, you might set a moderate to high temperature for creative phrasing and a high max tokens limit for detailed paragraphs. Conversely, a **Financial Chatbot** would benefit from a lower temperature to deliver consistent, accurate, and stable responses without unnecessary creativity.
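
These settings correspond directly to the sampling parameters of the underlying chat APIs. As an illustration only, here is where each would land in an OpenAI-style request body (parameter names follow OpenAI's chat completions API; the values echo the financial-chatbot example):

```ts
// Where the Model Parameters land in an OpenAI-style request body.
// Values reflect the financial-chatbot example: low temperature for
// consistent answers, a modest token budget for short replies.
const requestBody = {
  model: "gpt-4",
  messages: [{ role: "user", content: "Is my portfolio diversified enough?" }],
  temperature: 0.1, // 0-1; lower = more factual and consistent
  max_tokens: 512, // FlutterFlow's Max Tokens counts request + response combined
  top_p: 1.0, // nucleus sampling; usually left at the default
};
```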

Once configured, click the **Publish** button to make it live.


<div style={{
position: 'relative',
paddingBottom: 'calc(56.67989417989418% + 41px)', // Keeps the aspect ratio and additional padding
height: 0,
width: '100%'}}>
<iframe
src="https://demo.arcade.software/Vi9UpfINWa0V6uXZG0v2?embed&show_copy_link=true"
title=""
style={{
position: 'absolute',
top: 0,
left: 0,
width: '100%',
height: '100%',
colorScheme: 'light'
}}
frameborder="0"
loading="lazy"
webkitAllowFullScreen
mozAllowFullScreen
allowFullScreen
allow="clipboard-write">
</iframe>
</div>
<p></p>

:::info

After you successfully deploy the agent, any changes to its configuration—such as modifying the system message, model, or temperature—require you to redeploy the agent. This applies to non-Google agents only; for Google agents, the configuration is stored client-side, so no redeploy is needed.

> **Contributor:** Maybe mention here that deploys and re-deploys are only necessary for non-Google agents. For Google agents, the configuration of the agent is stored client-side, so we don't need to redeploy anything.
>
> **Collaborator (Author):** Updated!


:::

Now you can use the AI agent in your FlutterFlow app logic using the following actions.

## Send Message [Action]

The **Send Message** action allows your app to pass user input (such as text or images) to a selected AI Agent and receive a response. For example, you can add this action when a user taps a “Send” button after typing in a text field. The AI Agent can then reply based on its system instructions, preloaded messages, and model settings.

You can configure the following options for this action:

- **Select Agent**: Here, you select the specific AI Agent you previously configured.
- **Conversation ID**: A unique identifier you assign to maintain context and continuity across multiple interactions in the same thread. Using a consistent ID (e.g., `user123_AIStylist_202503181200`) allows the AI to remember past interactions and keep conversations coherent and contextual.
- **Text Input**: This is where you specify the user's message or input text that the AI agent will process. Typically, this input comes from a widget state (e.g., TextField).
- **Image Input**: If your agent supports image processing, you can provide an image. **Google** agents accept both images from [local device storage](../../ff-concepts/file-handling/displaying-media.md#uploaded-file) and [network URLs](../../ff-concepts/file-handling/displaying-media.md#network); **OpenAI** and **Anthropic** agents accept network URLs only, so upload local images to a storage bucket first and pass the resulting URL.

> **Contributor:** Actually, it's tricky -- for non-Google agents, we only allow network URLs. If we allowed uploaded files, we'd have to pass base64 data to the cloud function, and I found that they ran out of memory. The workaround for non-Google agents is that users can simply choose a local image, upload it to a bucket, and then pass the URL of that bucket. But for Google agents, we allow both uploaded image data and network URLs.
>
> **Collaborator (Author):** Sure, updated!

- **Action Output Variable Name**: Names the variable that stores the AI agent's response so you can display it to users or process it further.

![ai-agent-send-message-action.avif](imgs/ai-agent-send-message-action.avif)
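
The Conversation ID itself is just a string key; continuity comes from the message history kept under that key and replayed on each request. Here is a simplified sketch of the idea only; FlutterFlow manages this internally, and none of these names are real FlutterFlow APIs.

```ts
// Simplified sketch: conversation continuity keyed by a Conversation ID.
const histories = new Map<string, { role: string; content: string }[]>();

function sendMessage(conversationId: string, userText: string): void {
  // Reusing the same ID means the model sees all prior turns.
  const history = histories.get(conversationId) ?? [];
  history.push({ role: "user", content: userText });
  // ...call the agent with [system, ...preloaded, ...history] and append
  // the assistant's reply to the history...
  histories.set(conversationId, history);
}

// A stable per-user, per-agent ID keeps the thread coherent:
sendMessage("user123_AIStylist_202503181200", "What goes with white chinos?");

// The Clear Chat History action (described below) amounts to dropping the
// stored thread so the next message starts a fresh context:
function clearChatHistory(conversationId: string): void {
  histories.delete(conversationId);
}
```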

## Clear Chat History [Action]

The **Clear Chat History** action allows you to clear the remembered context. It takes the **Conversation ID** and stops referencing the existing thread ID when you next send a message. For example, you can add this action on the refresh button inside the chat to manually reset a conversation and start a fresh one with a new context.

> **Contributor:** Perfect explanation. That's all they need to know.


![ai-agent-reset-action.avif](imgs/ai-agent-reset-action.avif)
@@ -2,8 +2,8 @@
slug: /integrations/gemini
title: Gemini

> **Contributor:** We are going to deprecate the Gemini action eventually. Maybe you can add some sort of callout here redirecting users to use the new AI Agent instead?
>
> **Collaborator (Author):** Added!

description: Learn how to get started with the Gemini action in your FlutterFlow app to generate text, process text-and-image inputs, and count tokens.
-tags: [Gemini, Text Generation, Token Counting, Integration]
-sidebar_position: 1
+tags: [AI, Gemini, Integration]
+sidebar_position: 2
keywords: [FlutterFlow, Gemini, Text Generation, Token Counting, Integration]
---

Binary file not shown.
Binary file not shown.
4 changes: 0 additions & 4 deletions docs/ff-integrations/gemini/_category_.json

This file was deleted.

7 changes: 4 additions & 3 deletions docs/intro/ff-ui/builder.md
@@ -13,7 +13,7 @@ keywords: [App Builder, FlutterFlow, UI, Design]
On opening the project, you'll see the App Builder, which consists of four main sections:
[Navigation Menu](#navigation-menu), [Toolbar](#toolbar), [Canvas](#canvas-area), and [Properties Panel](#properties-panel).

-![app-builder](imgs/builder.avif)
+![navigation-menu.avif](imgs/navigation-menu.avif)

## Navigation Menu

@@ -34,8 +34,9 @@ Here is a list of all the features accessible from the navigation menu:
11. **Custom Functions**: Add custom functionalities, widgets, and actions.
12. **Cloud Functions**: Write and deploy cloud functions for Firebase.
13. **Tests**: Add automated tests.
-14. **Theme settings**: Customize visual appearance.
-15. **Settings and Integrations**: Access app-related settings and integrations.
+14. **Agents**: Create, configure, and manage [AI Agents](../../ff-integrations/AI/ai-agents.md) to integrate conversational AI interactions into your app.
+15. **Theme settings**: Customize visual appearance.
+16. **Settings and Integrations**: Access app-related settings and integrations.

## ToolBar

Binary file added docs/intro/ff-ui/imgs/navigation-menu.avif
Binary file not shown.
3 changes: 2 additions & 1 deletion docs/resources/projects/libraries.md
@@ -191,7 +191,8 @@ Once the library is imported, following resources are accessible for use:
- [Assets](../../resources/projects/settings/general-settings.md#app-assets) (Note: These are not versioned)

:::note
-Pages and Firestore Collections are still being worked on and may come in future updates.
+- Pages and Firestore Collections are still being worked on and may come in future updates.
+- Creation of [**AI Agents**](../../ff-integrations/AI/ai-agents.md) is not yet supported in Library projects.
+- Cloud Functions are not available in Library projects.

> **Contributor:** Another note is that Cloud Functions are also not available in Library projects. I learned that recently :)
>
> **Collaborator (Author):** Updated!

:::

It's important to note that these resources show up where they are instantiated. For example: