AI is becoming an intrinsic part of software development. It can help us write, test, and debug code. We can also infuse AI models directly into our applications. Conversely, we can create functions/tools that AI agents can call to augment their capabilities and knowledge.
Quarkus supports a few different ways to work with AI, mainly leveraging the LangChain4j extension. There are also other extensions, such as the Quarkus MCP server, which allows you to serve tools to be consumed by AI agents.
In this chapter, we'll explore how to work with AI models. We'll cover:
* Prompting AI models in your applications
* Preserving state between calls
* Creating tools for use by AI agents
* Embedding documents that can be queried by LLMs
* Building a chatbot
* Working with local models (using Podman Desktop AI Lab)
// documentation/modules/ROOT/pages/17_prompts.adoc
The Quarkus LangChain4j extension seamlessly integrates Large Language Models (LLMs) into Quarkus applications. LLMs are AI-based systems designed to understand, generate, and manipulate human language, showcasing advanced natural language processing capabilities. Thanks to this extension, we can harness LLM capabilities to develop more intelligent applications.
In this first chapter, we'll explore the simplest of interactions with an LLM: Prompting. It essentially means just asking questions to an LLM and receiving an answer in natural language from a given model, such as ChatGPT, Granite, Mistral, etc.
== Creating a Quarkus & LangChain4j Application
We're going to use the `quarkus-langchain4j-openai` extension for our first interaction with models.
The OpenAI extension supports models that expose the openly published OpenAI API specification.
Several models and model providers expose this API specification. If you want to use
a different API spec, then you can likely find a supported extension in the https://docs.quarkiverse.io/quarkus-langchain4j/dev/llms.html[Quarkus documentation].
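For reference, here is a minimal configuration sketch, assuming the `quarkus-langchain4j-openai` extension is on the classpath. The property name comes from the Quarkus LangChain4j documentation; `demo` is the rate-limited demo key that LangChain4j provides for experimentation:

[.console-input]
[source,config]
----
# Rate-limited demo key provided by LangChain4j; replace with a real key later
quarkus.langchain4j.openai.api-key=demo
----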
// documentation/modules/ROOT/pages/18_chains_memory.adoc
In this section, we'll cover how we can achieve this with the LangChain4j extension.
== Create an AI service with memory
Let's create an interface for our AI service, but with memory this time.
Create a new `AssistantWithMemory` Java interface in `src/main/java` in the `com.redhat.developers` package with the following contents:
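The full listing is elided in this excerpt; a minimal sketch of such an interface, using annotations from LangChain4j and Quarkus LangChain4j with illustrative method and parameter names, could look like:

[source,java]
----
package com.redhat.developers;

import dev.langchain4j.service.MemoryId;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

// Quarkus generates the implementation of this AI service at build time.
@RegisterAiService
public interface AssistantWithMemory {

    // @MemoryId keeps a separate chat memory per id, which is what allows
    // maintaining several concurrent conversations with the LLM.
    String chat(@MemoryId Integer memoryId, @UserMessage String message);
}
----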
The result will appear in your Quarkus terminal (the output can vary on each prompt execution).
NOTE: Take a close look at the IDs of our calls to the assistant. Do you notice that the last question was in fact directed to Klaus with ID=1? We were indeed able to maintain 2 separate and concurrent conversations with the LLM.
// documentation/modules/ROOT/pages/19_agents_tools.adoc
You can read more about this in the https://docs.quarkiverse.io/quarkus-langchain4j/dev/[Quarkus LangChain4j documentation].
== Add the Mailer and Mailpit extensions
Open a new terminal window, and make sure you’re at the root of your `{project-ai-name}` project, then run the following command to add emailing capabilities to our application:
[tabs]
====
Let's create an interface for our AI service, but with `SystemMessage` and `UserMessage` this time.
`SystemMessage` gives context to the AI Model.
In this case, we tell it that it should craft a message as if it is written by a professional poet.
The `UserMessage` is the actual instruction/question we're sending to the AI model.
As you can see in the example below,
you can format and parameterize the `UserMessage`, translating structured content to text and vice-versa.
Create a new `AssistantWithContext` Java interface in `src/main/java` in the `com.redhat.developers` package with the following contents:
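The full listing is elided in this excerpt; a minimal sketch, assuming the `EmailService` tool class from earlier and using Quarkus LangChain4j annotations (the method name and prompt texts are illustrative), could be:

[source,java]
----
package com.redhat.developers;

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

// Registering EmailService as a tool lets the model decide to send the email.
@RegisterAiService(tools = EmailService.class)
public interface AssistantWithContext {

    // The SystemMessage gives the model its persona; the UserMessage is the
    // parameterized instruction (templated on the 'topic' parameter).
    @SystemMessage("You are a professional poet.")
    @UserMessage("Write a poem about {topic}. Then send this poem by email.")
    String writeAPoem(String topic);
}
----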
== Modify application.properties to use the email tools
Tool calling is not supported with the OpenAI `demo` key, so we will need to either use a real API key or a local model that supports tools.
If you want to use OpenAI's ChatGPT, you can create and fund an account at https://platform.openai.com/[OpenAI] and then set the OpenAI API key property to your key.
We will use a local (free) open-source model served with Ollama instead.
To do this, you will need to https://ollama.com/download[download and install Ollama].
Once that's done, you will need to https://ollama.com/search?c=tools[download a model that supports tool calling], such as `granite3.1-dense:2b`. To do so, execute the command:
[#quarkuspdb-dl-ollama]
[.console-input]
[source,config,subs="+macros,+attributes"]
----
ollama pull granite3.1-dense:2b
----
Update the following properties in your `application.properties`:
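The exact values are elided in this excerpt; a sketch, assuming the `quarkus-langchain4j-ollama` extension is used and following its documented property names, might be:

[.console-input]
[source,config]
----
# Point the chat model at the locally served, tool-capable Ollama model
quarkus.langchain4j.ollama.chat-model.model-id=granite3.1-dense:2b
----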
NOTE: If you do not want to go through the trouble of creating an OpenAI account or installing Ollama, you can still test the scenario below; it just won't send an email, since the "Tool" functionality won't work.
Make sure your Quarkus Dev mode is still running. It should have reloaded with the new configuration.
Because we haven't configured the local email service, Quarkus will also have started a Dev Service to instantiate and configure a local email service for you (in dev mode only!).
You can check that it's running:
You can also run the following command:

[.console-input]
[source,bash]
----
curl localhost:8080/email-me-a-poem
----
An example of output (will vary on each prompt execution):
[.console-output]
[source,text]
----
I have composed a poem about Quarkus. I have sent it to you via email. Let me know if you need anything else
----
If you have a tool calling model configured, you can check your inbox for the actual email:
First, open the http://localhost:8080/q/dev-ui[DevUI, window=_blank] and click on the Mailpit arrow.
NOTE: If you don't provide a model that supports embeddings and tools, you will still be able to go through this exercise, but the "Tools" functions won't be called, resulting in unexpected answers. See the previous "Agents and Tools" chapter for more information.
Let's provide a document containing the service's terms of use:
// documentation/modules/ROOT/pages/21_podman_ai.adoc
:project-podman-ai-name: quarkus-podman-ai-app
Throughout this tutorial, we've been working with OpenAI's remote models or Ollama's models on our local machine. However, wouldn't it be nice if we could work with local models (without incurring costs) AND have a nice visualization of what's going on?
Podman Desktop is a GUI tool that helps with running and managing containers on your local machine, but thanks to its AI Lab extension it can also run AI models locally. With Quarkus and LangChain4j, it then becomes trivial to start developing with these models. Let's find out how!
== Installing Podman Desktop AI
First, if you haven't yet, download and install Podman Desktop on your operating system. https://podman-desktop.io/downloads[The instructions can be found here, window="_blank"].
NOTE: For Windows/macOS users, if you can, give the Podman machine at least 8GB of memory and 4 CPUs (Generative AI models are resource hungry!). The model can run with fewer resources, but it will be significantly slower.