Commits
21 commits
02951a2
docs: add ai component docs and update ai quickstart
ugur-vaadin Feb 24, 2026
a90df07
Update articles/building-apps/ai/quickstart-guide.adoc
ugur-vaadin Feb 25, 2026
f4eaaa4
docs: remove scroller use from quick start example
ugur-vaadin Feb 25, 2026
fe8c5e3
fix: add import ts files
ugur-vaadin Feb 25, 2026
44d782a
docs: add preview banner to ai quickstart
ugur-vaadin Feb 25, 2026
6a70726
docs: set width full for quickstart ui components
ugur-vaadin Feb 25, 2026
3722525
docs: fix ai component interface section headers
ugur-vaadin Feb 25, 2026
50e6e51
chore: remove unused dependency
ugur-vaadin Feb 25, 2026
0da2d90
docs: show examples by default
ugur-vaadin Feb 25, 2026
1ba496a
Update articles/components/ai-components/index.adoc
ugur-vaadin Feb 25, 2026
a646024
fix: fix ai example imports
ugur-vaadin Feb 25, 2026
360851d
docs: clarify llm provider examples
ugur-vaadin Feb 26, 2026
a5dc8c1
docs: move ai components to flow reference
ugur-vaadin Feb 26, 2026
38ae41e
docs: move ai component examples to flow
ugur-vaadin Feb 26, 2026
1c92e25
docs: fix related component links
ugur-vaadin Feb 26, 2026
b817717
docs: update session persistence to include serialization
ugur-vaadin Feb 27, 2026
d141e6d
docs: rename ai page and split sections into pages
ugur-vaadin Mar 4, 2026
f966157
chore: run formatter
ugur-vaadin Mar 4, 2026
80744d2
Merge branch 'main' into docs-add-ai-component-docs-and-update-ai-qui…
ugur-vaadin Mar 4, 2026
a632f77
Merge branch 'main' into docs-add-ai-component-docs-and-update-ai-qui…
ugur-vaadin Mar 5, 2026
ed9e78c
docs: update naming and add links
ugur-vaadin Mar 6, 2026
6 changes: 5 additions & 1 deletion articles/building-apps/ai/index.adoc
@@ -14,9 +14,13 @@ In this section, you'll learn how to connect a Vaadin application to a Large Lan
You'll learn how to:

* connect your application to an AI client with popular Java libraries such as Spring AI and LangChain4j,
* choose Vaadin components that create intuitive, AI-powered workflows -- such as `MessageInput`, `MessageList`, and `Scroller`, and
* use the xref:{articles}/flow/ai-support#[AI support features] to connect LLM providers to Vaadin UI components with minimal boilerplate,
* choose Vaadin components that create intuitive, AI-powered workflows -- such as `MessageInput`, `MessageList`, and `UploadManager`, and
* deliver real-time updates to users through server push.

[TIP]
The <<{articles}/flow/ai-support#,AI support features>> eliminate the boilerplate of wiring UI components to LLM frameworks. The [classname]`AIOrchestrator` handles streaming, conversation history, file attachments, and tool calling behind a simple builder API. See the <<{articles}/flow/ai-support#,documentation>> for the full API reference.

section_outline::[]

[NOTE]
172 changes: 48 additions & 124 deletions articles/building-apps/ai/quickstart-guide.adoc
@@ -4,13 +4,14 @@
description: A compact chat view with streaming, correct scrolling, and message context.
meta-description: Hands-on tutorial - connect Vaadin to an LLM with Spring, build a streaming chat UI, and apply simple, reusable patterns for prompts, memory, and UX.
order: 20
section-nav: badge-flow
section-nav: badge-preview badge-flow
---
= [since:com.vaadin:vaadin@V25.1]#Quick Start-Guide: Add an AI Chat Bot to a Vaadin + Spring Boot Application# [badge-flow]#Flow#

:preview-banner-content: This guide uses preview AI support features. This means that they are not yet ready for production usage and may have limitations or bugs. We encourage you to try them out and provide feedback to help us improve them.
include::{articles}/_preview-banner.adoc[opts=optional]

= Quick Start-Guide: Add an AI Chat Bot to a Vaadin + Spring Boot Application [badge-flow]#Flow#

This guide shows how to connect a Large Language Model (LLM) into a Vaadin application using Spring AI and Spring Boot. You'll build a minimal chat UI with Vaadin provided components **MessageList** and **MessageInput**, stream responses token-by-token, and keep a conversational tone in the dialog with the AI.
This guide shows how to connect a Large Language Model (LLM) to a Vaadin application using Spring AI, Spring Boot, and the <<{articles}/flow/ai-support#,AI support features>>. You'll build a minimal chat UI with **MessageList** and **MessageInput**, stream responses token-by-token, and preserve conversational context between messages -- all without writing boilerplate wiring code.

image::images/chatbot-image.png[role=text-center]

@@ -19,9 +20,6 @@

== Prerequisites

* Java 17+
* Spring Boot 3.5+ (or newer)
* Vaadin 24.8+
* An OpenAI API key (`OPENAI_API_KEY`)


@@ -48,7 +46,7 @@
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-bom</artifactId>
<version>1.0.1</version><!-- use the latest stable -->
<version>2.0.0-M2</version><!-- use the latest available version -->
<type>pom</type>
<scope>import</scope>
</dependency>
@@ -110,160 +108,87 @@
----


== 5. Create the Chat service (Spring AI)

Create a new class called **ChatService** and annotate it with `@Service`. This service builds a `ChatClient` with a **ChatMemory** advisor in the constructor and exposes a **reactive stream** of tokens.

[source,java]
----
// src/main/java/org/vaadin/example/ChatService.java
package org.vaadin.example;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.client.advisor.MessageChatMemoryAdvisor;
import org.springframework.ai.chat.memory.ChatMemory;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Flux;

@Service
public class ChatService {

private final ChatClient chatClient;

public ChatService(ChatClient.Builder chatClientBuilder,
ChatMemory chatMemory) {
// Add a memory advisor to the chat client
var chatMemoryAdvisor = MessageChatMemoryAdvisor
.builder(chatMemory)
.build();

// Build the chat client
chatClient = chatClientBuilder
.defaultAdvisors(chatMemoryAdvisor)
.build();
}

public Flux<String> chatStream(String userInput, String chatId) {
return chatClient.prompt()
.advisors(advisorSpec ->
advisorSpec.param(ChatMemory.CONVERSATION_ID, chatId)
)
.user(userInput)
.stream()
.content();
}
}

----

Why a chat memory? **ChatMemory** keeps context of the conversations so users don't have to repeat themselves. The `chatId` keeps the context for a specific chat and doesn't share it with other chats and users.

== 5. Build the Chat UI with the AI Orchestrator

== 6. Build the Chat UI with Vaadin
The <<{articles}/flow/ai-support#,AI Orchestrator>> connects your UI components to the LLM. It handles message display, token streaming, conversation memory, and UI updates automatically. You don't need to write a separate service class -- the orchestrator manages the Spring AI integration directly.

Use `MessageList` to render the conversation as Markdown and `MessageInput` to handle the user prompts. Wrap the list in a `Scroller` so long chats don't grow the layout beyond the browser window:
Use `MessageList` to render the conversation and `MessageInput` for user prompts. Then wire everything together with the orchestrator's builder:


[source,java]
----
// src/main/java/org/vaadin/example/MainView.java
package com.example.application.views.chatbot;
package org.vaadin.example;

import com.example.application.services.ChatService;
import com.vaadin.flow.component.Composite;
import com.vaadin.flow.component.ai.orchestrator.AIOrchestrator;
import com.vaadin.flow.component.ai.provider.LLMProvider;
import com.vaadin.flow.component.messages.MessageInput;
import com.vaadin.flow.component.messages.MessageList;
import com.vaadin.flow.component.messages.MessageListItem;
import com.vaadin.flow.component.orderedlayout.Scroller;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.router.Menu;
import com.vaadin.flow.router.PageTitle;
import com.vaadin.flow.router.Route;
import com.vaadin.flow.router.RouteAlias;
import org.springframework.ai.chat.model.ChatModel;
import org.vaadin.lineawesome.LineAwesomeIconUrl;

import java.time.Instant;
import java.util.UUID;

@PageTitle("Chat Bot")
@Route("")
@RouteAlias("chat-bot")
@Menu(order = 0, icon = LineAwesomeIconUrl.ROBOT_SOLID)
public class ChatBotView extends Composite<VerticalLayout> {

private final ChatService chatService;
private final MessageList messageList;
private final String chatId = UUID.randomUUID().toString();
public class MainView extends Composite<VerticalLayout> {

public ChatBotView(ChatService chatService) {
this.chatService = chatService;

//Create a scrolling MessageList
messageList = new MessageList();
var scroller = new Scroller(messageList);
scroller.setHeightFull();
getContent().addAndExpand(scroller);

//create a MessageInput and set a submit-listener
public MainView(ChatModel chatModel) {
// Create UI components
var messageList = new MessageList();
messageList.setSizeFull();
var messageInput = new MessageInput();
messageInput.addSubmitListener(this::onSubmit);
messageInput.setWidthFull();

getContent().add(messageInput);
}
// Create the LLM provider
var provider = LLMProvider.from(chatModel);

private void onSubmit(MessageInput.SubmitEvent submitEvent) {
//create and handle a prompt message
var promptMessage = new MessageListItem(submitEvent.getValue(), Instant.now(), "User");
promptMessage.setUserColorIndex(0);
messageList.addItem(promptMessage);

//create and handle the response message
var responseMessage = new MessageListItem("", Instant.now(), "Bot");
responseMessage.setUserColorIndex(1);
messageList.addItem(responseMessage);

//append a response message to the existing UI
var userPrompt = submitEvent.getValue();
var uiOptional = submitEvent.getSource().getUI();
var ui = uiOptional.orElse(null); //implementation via ifPresent also possible

if (ui != null) {
chatService.chatStream(userPrompt, chatId)
.subscribe(token ->
ui.access(() ->
responseMessage.appendText(token)));
}
// Wire everything together
AIOrchestrator.builder(provider,
"You are a helpful assistant.")
.withMessageList(messageList)
.withInput(messageInput)
.build();

// Add UI components to the layout
getContent().addAndExpand(messageList);
getContent().add(messageInput);
}
}

----

**Key UI patterns used here:**
The orchestrator takes care of:

* **Dialog character:** display prompts and responses separately so the difference remains visible.
* **Streaming output:** show tokens as they arrive for perceived performance.
* **Markdown rendering:** richer answers (lists, code blocks, emojis).
* **Sticky scroll:** keep the latest answer in view.
* **Displaying messages:** user prompts and assistant responses appear in the Message List automatically.
* **Streaming output:** tokens are pushed to the UI as they arrive from the LLM.
* **Conversation memory:** the provider maintains a 30-message context window, so the assistant remembers earlier messages.
* **Markdown rendering:** responses render as rich text (lists, code blocks, links).
* **Sticky scroll:** the Message List keeps the latest answer in view.


== 7. Run & Iterate
== 6. Run & Iterate

Start the application, open the browser, and try your first prompts.


== What You Built

* A production-ready **chat bot** using Vaadin components
* A working **chat bot** built with Vaadin's AI support features
* **Token-by-token streaming** with Vaadin Push
* **Conversation memory** via Spring AI advisors
* **Conversation memory** managed by the LLM provider


== Next Possible Steps

* Add a **system prompt** field to steer the assistant (e.g., tone, persona).
* Add **clear chat** and **export** actions.
* Add **feedback** to evaluate responses
* Support **attachments** and **tool calls** (retrieval, functions).
* Customize the **system prompt** to steer the assistant (e.g., tone, persona).
* Add **file attachments** with `UploadManager` via <<{articles}/flow/ai-support/file-attachments#,`withFileReceiver()`>>.
* Support **tool calls** via <<{articles}/flow/ai-support/tool-calling#,`withTools()`>>.
* **Persist conversation history** via <<{articles}/flow/ai-support/conversation-history#,`ResponseCompleteListener`>>.
* Log prompts/responses for observability.
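Several of these extensions only add a call to the builder chain from the quickstart. The following is a hedged sketch, not a verified implementation: it reuses the quickstart's `provider`, `messageList`, and `messageInput`, while the `WeatherTools` class and the exact `UploadManager` wiring are illustrative assumptions.

[source,java]
----
// Hypothetical extension of the quickstart wiring. The builder methods
// withFileReceiver() and withTools() come from the linked documentation;
// WeatherTools is an assumed tool-bearing class, not part of the framework.
var uploadManager = new UploadManager();

AIOrchestrator.builder(provider, "You are a helpful assistant.")
        .withMessageList(messageList)
        .withInput(messageInput)
        .withFileReceiver(uploadManager) // attachments are sent with the next prompt
        .withTools(new WeatherTools())   // hypothetical tools object
        .build();
----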


@@ -275,10 +200,9 @@

== Complete File List Recap

* `src/main/java/org/vaadin/example/Application.java` — Spring Boot + `@Push`
* `src/main/java/org/vaadin/example/ChatService.java` — Spring AI client + memory
* `src/main/java/org/vaadin/example/MainView.java` — Vaadin chat UI
* `src/main/resources/application.properties` — OpenAI config
* `pom.xml` — Vaadin + Spring AI dependencies
* `src/main/java/org/vaadin/example/Application.java` -- Spring Boot + `@Push`
* `src/main/java/org/vaadin/example/MainView.java` -- AI Orchestrator + Vaadin chat UI
* `src/main/resources/application.properties` -- OpenAI config
* `pom.xml` -- Vaadin + Spring AI dependencies

That's it your Vaadin application now speaks AI. 🚀
That's it -- your Vaadin application now speaks AI.
2 changes: 2 additions & 0 deletions articles/components/message-list/index.adoc
@@ -113,6 +113,8 @@ You can listen for attachment click events to handle downloads or other actions.

== Usage for AI Chats

For a higher-level approach that eliminates boilerplate wiring, see the <<{articles}/flow/ai-support#,AI support features>>, which connect Message List and Message Input to an LLM provider automatically.

Combine Message List with Message Input to create effective AI chat interfaces. Build your AI chat interface with:

* Message List with Markdown formatting to display conversation history
54 changes: 54 additions & 0 deletions articles/flow/ai-support/component-interfaces.adoc
@@ -0,0 +1,54 @@
---
title: Component Interfaces
description: AI interface contracts for input, message list, message, and file receiver components used by the AIOrchestrator.
meta-description: Learn about AIInput, AIMessageList, AIMessage, and AIFileReceiver interfaces that define contracts for UI components used by the Vaadin AIOrchestrator.

order: 10
---


= Component Interfaces

ifdef::flow[]

The orchestrator's builder accepts both standard Vaadin components and custom implementations of the AI interfaces. This allows you to swap in custom UI components without changing the orchestrator wiring.


== AI Input

[classname]`AIInput` defines the contract for text input components. It has a single method:

* [methodname]`addSubmitListener(SerializableConsumer<String>)` -- registers a listener that receives the submitted text.

The builder accepts either a <<{articles}/components/message-input#,[classname]`MessageInput`>> directly or any [classname]`AIInput` implementation.
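As an illustration of a custom implementation (the interface package and exact imports are assumptions here -- check your Vaadin version), a plain [classname]`TextField` with a send button can act as an [classname]`AIInput`:

[source,java]
----
// Sketch of a custom AIInput: a TextField plus a send Button.
// Assumes the AIInput and SerializableConsumer imports available
// in your Vaadin version.
public class PromptField extends Composite<HorizontalLayout> implements AIInput {

    private final TextField field = new TextField();
    private final List<SerializableConsumer<String>> listeners = new ArrayList<>();

    public PromptField() {
        var send = new Button("Send", event -> {
            var value = field.getValue();
            field.clear();
            // Notify the orchestrator (and any other listeners) of the submitted text
            listeners.forEach(listener -> listener.accept(value));
        });
        getContent().add(field, send);
    }

    @Override
    public void addSubmitListener(SerializableConsumer<String> listener) {
        listeners.add(listener);
    }
}
----

An instance of this class can then be passed to the orchestrator builder wherever an [classname]`AIInput` is accepted.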


== AI Message List

[classname]`AIMessageList` defines the contract for displaying messages. Key methods:

* [methodname]`addMessage(String text, String userName, List<AIAttachment> attachments)` -- creates and adds a message, returning an [classname]`AIMessage` handle.
* [methodname]`addAttachmentClickListener(AttachmentClickCallback)` -- registers a handler for attachment click events.

The builder accepts either a <<{articles}/components/message-list#,[classname]`MessageList`>> directly or any [classname]`AIMessageList` implementation.

== AI Message

[classname]`AIMessage` represents a single message. It is returned by [methodname]`AIMessageList.addMessage()` and supports:

* [methodname]`getText()` / [methodname]`setText(String)` -- read or replace the message text.
* [methodname]`appendText(String)` -- append a token during streaming. The orchestrator calls this as tokens arrive from the LLM.
* [methodname]`getTime()` / [methodname]`setTime(Instant)` -- message timestamp.
* [methodname]`getUserName()` -- the sender display name.


== AI File Receiver

[classname]`AIFileReceiver` defines the contract for file upload components. It has a single method:

* [methodname]`takeAttachments()` -- returns all pending attachments and clears the internal state. The orchestrator calls this when the user submits a message.

The builder accepts <<{articles}/components/upload#,[classname]`UploadManager`>>, <<{articles}/components/upload#,[classname]`Upload`>>, or any [classname]`AIFileReceiver` implementation.

The orchestrator installs its own in-memory upload handler on <<{articles}/components/upload#,[classname]`UploadManager`>> or <<{articles}/components/upload#,[classname]`Upload`>>. The component must not have an upload handler already set.
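As a sketch of a custom receiver (the return type of [methodname]`takeAttachments()` and how [classname]`AIAttachment` instances are produced are assumptions based on the contract above), an in-memory implementation could look like this:

[source,java]
----
// Sketch of a custom AIFileReceiver backed by an in-memory list.
// The List<AIAttachment> return type is an assumption derived from
// the contract described above.
public class InMemoryFileReceiver implements AIFileReceiver {

    private final List<AIAttachment> pending = new CopyOnWriteArrayList<>();

    // Called by your own upload-handling code when a file finishes uploading.
    public void add(AIAttachment attachment) {
        pending.add(attachment);
    }

    @Override
    public List<AIAttachment> takeAttachments() {
        // Hand over all pending attachments and clear the internal state,
        // as the contract requires.
        var taken = List.copyOf(pending);
        pending.clear();
        return taken;
    }
}
----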

endif::flow[]