mcorbin-ibm/quarkus-ai-getting-started
Creating your first AI Java application with Quarkus and LangChain4j

You will learn how to create a simple RESTful Java AI application that asks a large language model (LLM) to write a short poem on a topic provided by the user. The service responds to GET requests made to the http://localhost:8080/poems/{topic}/{lines} URL. The user supplies the topic and the length of the desired poem; for example, http://localhost:8080/poems/purple/5 generates a poem such as:

In twilight blooms a regal hue,
The whispers of the evening sky,
Lavender dreams in softest dew,
A canvas where the shadows lie,
In purple’s embrace, we learn to fly.

You will create a Java class and a Java interface. The class represents a resource which defines the application’s endpoint and calls the AI model by using the interface to implement an AI service. The AI service uses the parameter values passed in through the endpoint to build a prompt, a natural language text request, and sends it to the AI.

The LLM parses the prompt and returns a poem on the topic and with the number of lines requested. LLMs are AI models that are trained to generate output based on the natural language requests they receive. The input and output of LLMs are usually text, as in this application, but some LLMs specialize in other formats such as images or video.

Architecture

Much of the work needed to build the prompt and to connect to the LLM in order to send the request and get a response is handled for you by the Quarkus LangChain4j extension, an open source extension that integrates the LangChain4j library with Quarkus.

Creating the AI service

The AI service provides an abstraction layer to make it easier to write a Java application that interacts with an LLM. The AI service composes the prompts to send to the LLM and receives the responses from the LLM. The application needs only minimal configuration to connect to the LLM because the AI service handles the connection details. AI services can manage other information too, including chat memory and toolboxes, which will be explored in other guides.

The AI service composes the prompt from two pieces of information in the class that implements the AI service:

  • the system message, which provides context to the request that the application sends to the LLM. For example, you can set the role or persona of the LLM and guide the LLM’s behavior when responding.

  • the user message, which represents the user’s latest input to send to the LLM. The user message is usually processed by the LLM after the system message.

Create the AiPoemService interface at src/main/java/org/acme/AiPoemService.java:

link:src/main/java/org/acme/AiPoemService.java[role=include]
  1. Implements the interface as an AI service which can connect to the LLM that is configured in the resources/application.properties file.

  2. Instructs the LLM to take the role of a professional poet and to display the generated poem in well-formed HTML with line breaks so that it renders neatly when viewed in a web browser.

  3. Asks the LLM to generate a poem on the topic and of the length that the user has chosen. The user’s choices are passed as parameters from the endpoint, {topic} and {lines}, to complete the templated user message placeholders, {poemTopic} and {poemLines}.

  4. Starts an exchange between the application and the AI service. The AI service composes a prompt including the system message and the user message and sends it to the LLM. The writeAPoem() method is called by the showMeAPoem() method in the Poems class, passing in the user’s chosen topic and length from the endpoint parameters.
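Because the `link:` include above is not rendered in this view, here is a sketch of what the AiPoemService interface might look like. The annotations are the standard Quarkus LangChain4j ones, but the exact message wording is illustrative and may differ from the file in the repository:

```java
package org.acme;

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import dev.langchain4j.service.V;
import io.quarkiverse.langchain4j.RegisterAiService;

// (1) Registers this interface as an AI service backed by the LLM
// configured in resources/application.properties.
@RegisterAiService
public interface AiPoemService {

    // (2) The system message sets the LLM's persona and asks for
    // well-formed HTML output so the poem renders neatly in a browser.
    @SystemMessage("You are a professional poet. Display the poem in well-formed HTML, "
            + "with each line of the poem on a separate line.")
    // (3) The user message template; {poemTopic} and {poemLines} are
    // completed from the method parameters.
    @UserMessage("Write a poem about {poemTopic}. The poem should be {poemLines} lines long.")
    // (4) Called by showMeAPoem() in the Poems class with the user's
    // chosen topic and length.
    String writeAPoem(@V("poemTopic") String topic, @V("poemLines") int lines);
}
```

The `@V` annotations bind each parameter to its placeholder explicitly, which avoids relying on parameter names being retained at compile time.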

Creating the RESTful resource

The RESTful resource defines the endpoint of your RESTful service. When a GET request is made to the endpoint, the showMeAPoem() method runs and calls the writeAPoem() method in the AI service to send a request to the LLM.

In this application, the resource class defines the endpoint that receives the user’s input (choice of topic and number of lines in the poem) and then passes it to the AI service to include in its request to the LLM.

Create the Poems class at src/main/java/org/acme/Poems.java:

link:src/main/java/org/acme/Poems.java[role=include]
  1. Implements the AiPoemService interface as aiPoemService.

  2. Defines the RESTful endpoint that takes the user’s input as endpoint parameters, {topic} and {lines}.

  3. Declares the showMeAPoem() method, which takes two arguments, userTopic and userLines (because there is more than one argument, you must explicitly annotate each parameter). When a GET request is made to the /poems/{topic}/{lines} endpoint, the values of the {topic} and {lines} parameters are passed as the showMeAPoem() method’s userTopic and userLines arguments.

  4. Calls the AI service’s writeAPoem() method with the values received from the endpoint. These values are added to the user message as part of the prompt that the AI service sends to the LLM. The response from the LLM is then returned as HTML to be displayed in the browser.
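Again, since the `link:` include is not rendered here, the Poems resource class might look something like the following sketch (the Jakarta REST annotations are standard, but details may differ from the repository):

```java
package org.acme;

import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

// (2) Defines the base path of the RESTful endpoint.
@Path("/poems")
public class Poems {

    // (1) Injects the AI service implementation that Quarkus
    // generates from the AiPoemService interface.
    @Inject
    AiPoemService aiPoemService;

    // (2) The {topic} and {lines} endpoint parameters carry the user's input.
    @GET
    @Path("/{topic}/{lines}")
    @Produces(MediaType.TEXT_HTML)
    // (3) Each parameter is annotated explicitly with @PathParam.
    public String showMeAPoem(@PathParam("topic") String userTopic,
                              @PathParam("lines") int userLines) {
        // (4) Passes the user's values to the AI service, which sends
        // the prompt to the LLM and returns the generated poem.
        return aiPoemService.writeAPoem(userTopic, userLines);
    }
}
```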

Configuring the application

Connecting to an LLM is greatly simplified by using LangChain4j. For this application, Quarkus uses the Quarkus LangChain4j OpenAI extension, which is configured in the pom.xml. You then need only set the API key and the base URL properties for the LLM in src/main/resources/application.properties; in this case, you can use the value demo to get limited demo access to the LLM which is sufficient for this application:

link:src/main/resources/application.properties[role=include]
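The properties file typically needs only the two settings described above. The exact base URL below is an assumption based on the LangChain4j demo service; check the file in the repository for the values it actually uses:

```properties
# API key for the OpenAI-compatible LLM; "demo" gives limited demo access
quarkus.langchain4j.openai.api-key=demo
# Base URL of the LLM endpoint (assumed value for the demo service)
quarkus.langchain4j.openai.base-url=http://langchain4j.dev/demo/openai/v1
```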

Running the application

If you have installed the Quarkus CLI (see Step 1), run the following command to start Quarkus in dev mode:

quarkus dev

Otherwise, run the following Maven command which starts the application in dev mode:

./mvnw quarkus:dev

Any changes you make to the application are automatically rebuilt and re-deployed while running in dev mode.

To test the application, request the endpoint, replacing the template placeholders with values of your choice. For example, to request a poem of 5 lines about purple, use the URI http://localhost:8080/poems/purple/5. Because the system message asks for HTML output, the poem should display neatly in a web browser.

Alternatively, run the following curl command:

curl -w "\n" http://localhost:8080/poems/purple/5

Notice that there is a slight pause while the LLM responds; the application then returns a short poem on the chosen topic and of the requested length.

Try alternative prompts without modifying your code

The Quarkus Dev UI (when running in dev mode) provides a chat interface where you can test alternative user messages without modifying your application code. To use the Dev UI chat interface:

  1. From the running terminal, press d to open the Quarkus Dev UI Extensions page (http://localhost:8080/q/dev-ui/extensions) in a browser. The Extensions page lists all the extensions installed in your running instance of Quarkus.

  2. In the LangChain4j Core tile, click Chat to open the Chat interface.

  3. The System message field contains the system message from your application. You can modify the system message if you want to.

  4. In the Message field, type a user message then press Send.

The application runs and returns a response based on the system and user messages entered in the chat window.

Where next?

Congratulations! You have created your first AI Java application and run it on Quarkus with LangChain4j.

Next, either:

  • Take a look at Build an AI-powered document assistant with Quarkus and LangChain4j (perhaps a bit of a jump from this very basic tutorial).

  • Or try https://developer.ibm.com/tutorials/create-simple-rest-app-quarkus/ (a basic introduction to Quarkus, but not AI).
