Commit 87b9fd6

various small updates

1 parent 1b69e35 commit 87b9fd6
File tree: 4 files changed (+39 -27 lines)

content/modules/ROOT/pages/module-devhub.adoc

Lines changed: 25 additions & 13 deletions
@@ -73,9 +73,9 @@ The output should look similar to the following.
 [source,bash]
 ----
 ....
---/ __ \/ / / / _ | / _ \/ //_/ / / / __/
--/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
---\___\_\____/_/ |_/_/|_/_/|_|\____/___/
+--/ __ \/ / / / _ | / _ \/ //_/ / / / __/
+-/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
+--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
 INFO [io.quarkus] (Quarkus Main Thread) insurance-app 1.0.0-SNAPSHOT on JVM (powered by Quarkus xx.xx.xx) started in 19.615s. Listening on: http://0.0.0.0:8080
 INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
 INFO [io.quarkus] (Quarkus Main Thread) Installed features: [agroal, cdi, hibernate-orm, hibernate-orm-panache, jdbc-h2, langchain4j, langchain4j-ollama, langchain4j-openai, langchain4j-websockets-next, narayana-jta, quinoa, qute, rest, rest-client, rest-client-jackson, rest-jackson, smallrye-context-propagation, vertx, websockets-next]
@@ -85,10 +85,22 @@ Tests paused
 Press [e] to edit command line args (currently ''), [r] to resume testing, [o] Toggle test output, [:] for the terminal, [h] for more options>
 ----
 
-Validate your local Parasol application against the production version by accessing the https://{user}-parasol-insurance-parasol-webui.{openshift_cluster_ingress_domain}[Parasol web page^].
+Validate your local Parasol application against the production version by accessing the https://{user}-parasol-insurance-parasol-webui.{openshift_cluster_ingress_domain}[Parasol web page^] (it may take up to a minute to come up; wait for the "Listening on..." message shown above).
 
 image::devhub/parasol_ui_web.png[]
 
+==== Preview the changes you need to make
+
+To add the new feature, you will either create or edit the following files. Use this list as a checklist to ensure you've made all the changes. If you get errors when you try to run the app, verify that each file was changed as described in the instructions below.
+
+* `src/main/java/org/parasol/model/Email.java` - A Java record defining an incoming email from a customer
+* `src/main/java/org/parasol/model/EmailResponse.java` - A Java record defining a response (subject + message)
+* `src/main/java/org/parasol/ai/EmailService.java` - An interface for interacting with the underlying LLM
+* `src/main/java/org/parasol/resources/EmailResource.java` - A REST-like web frontend interface
+* `src/main/resources/application.properties` - The Quarkus configuration where you'll define the parameters for connecting to the LLM inference service
+* `src/main/webui/src/app/components/EmailGenerate/EmailGenerate.tsx` - A React web component providing a simple interface for submitting a customer email
+* `src/main/webui/src/app/routes.tsx` - A list of React routes to which you'll add the new Email generator component
+
 ==== Create Java records beans
 
 Create a new Java record, `Email.java`, in the `src/main/java/org/parasol/model` directory to carry email data in a concise and immutable way.
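For readers new to Java records, here is a brief sketch (not part of the commit) of why a record is a good fit for this: the compiler generates the constructor, accessors, `equals`, `hashCode`, and `toString` from a one-line declaration. The component names below are hypothetical; the workshop's `Email.java` snippet defines the real ones.

```java
// Minimal sketch of a Java record as an immutable data carrier.
// Component names (from, subject, body) are illustrative only;
// the workshop's Email.java defines the actual fields.
public class EmailRecordSketch {
    record Email(String from, String subject, String body) { }

    public static void main(String[] args) {
        Email email = new Email("customer@example.com", "New claim", "My car was damaged in a storm.");
        // Accessor methods are generated automatically, one per component.
        System.out.println(email.subject());
        // equals() compares component values, not object identity.
        System.out.println(email.equals(new Email("customer@example.com", "New claim", "My car was damaged in a storm.")));
    }
}
```

Because records are immutable value carriers, they map cleanly onto JSON request/response payloads.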
@@ -137,26 +149,26 @@ import dev.langchain4j.service.SystemMessage;
 import dev.langchain4j.service.UserMessage;
 import io.quarkiverse.langchain4j.RegisterAiService;
 
-@RegisterAiService(modelName = "parasol-email")
+@RegisterAiService(modelName = "parasol-email")<1>
 public interface EmailService {
-@SystemMessage("""
+@SystemMessage("""<2>
 You are a helpful, respectful and honest assistant named "Parasol Assistant".
 
 You are responding to customer emails. Provide a friendly response that is written by Parasol. The response should thank them for being a customer. Include information about when parasol insurance was founded.
-
+
 Your response must look like the following JSON:
 
 {
 "subject": [A good one-line greeting],
 "message": [Your response, summarizing the information they gave and ask the customer for any follow-up information needed to file a claim]
 }
 """)
-EmailResponse chat(@UserMessage String claim);
+EmailResponse chat(@UserMessage String claim);<3>
 }
 ----
-<1> *@RegisterAiService* annotation is pivotal for registering the AI Service, represented as a Java interface.
-<2> *@SystemMessage* annotation defines the scope and initial instructions, serving as the first message sent to the LLM. It delineates the AI service's role in the interaction.
-<3> *@UserMessage* annotation defines primary instructions dispatched to the LLM. It typically encompasses requests and the expected response format.
+<1> The `@RegisterAiService` annotation registers the AI Service, represented as a Java interface.
+<2> The `@SystemMessage` annotation defines the scope and initial instructions, serving as the first message sent to the LLM. It delineates the AI service's role in the interaction.
+<3> The `@UserMessage` annotation defines the primary instructions dispatched to the LLM. It typically encompasses the request and the expected response format.
 
 ==== Create Jakarta REST resource
 
@@ -207,7 +219,7 @@ quarkus.langchain4j.openai.parasol-email.timeout=600s<3>
 quarkus.langchain4j.openai.parasol-email.chat-model.model-name=parasol-instruct<4>
 quarkus.langchain4j.openai.parasol-email.base-url=http://parasol-instruct-predictor.aiworkshop.svc.cluster.local:8080/v1<5>
 ----
-<1> Specify the model provider (e.g., openai, huggingface). Note that you use the Open AI API specification when you connedt to the LLM (parasol-instruct) inference endpoint
+<1> Specify the model provider (e.g., openai, huggingface). Note that you use the OpenAI API specification when you connect to the LLM (parasol-instruct) inference endpoint
 <2> Set the model temperature. Temperature is a parameter used in natural language processing models to increase or decrease the “confidence” a model has in its most likely response
 <3> Specify the timeout between question and response in the LLM
 <4> Specify the model name to connect to
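Callout <2> above mentions the model temperature. As an illustration of how such tuning might look, here is a hedged sketch; the exact key names follow the `chat-model.*` naming pattern used for `model-name` above, so treat them as assumptions and verify against your generated `application.properties`.

```properties
# Hypothetical tuning sketch; key names mirror the chat-model.* pattern shown above.
# Lower temperature => more predictable output (0 is most deterministic).
quarkus.langchain4j.openai.parasol-email.chat-model.temperature=0.3
# Cap response length to keep generated emails concise.
quarkus.langchain4j.openai.parasol-email.chat-model.max-tokens=500
```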
@@ -313,7 +325,7 @@ This interface allows customer service representatives to copy and paste the cus
 
 ==== Add a new menu item in the navigation bar
 
-Open the `routes.tsx` file in the `src/main/webui/src/app` directory and `uncomment` the following code out in line *8* and *89 - 95* to show the new menu item.
+Open the `routes.tsx` file in the `src/main/webui/src/app` directory and `uncomment` the code on line *8* and lines *89 - 95* to show the new menu item.
 
 [NOTE]
 ====

content/modules/ROOT/pages/module-ilab.adoc

Lines changed: 3 additions & 1 deletion
@@ -161,7 +161,7 @@ Now that we understand the constructs of the taxonomy's knowledge, let's go ahea
 
 ==== Open the `instructlab` taxonomy directory in Visual Studio Code
 
-You can open VSCode by following the instructions below:
+Open VSCode by running the command below. Even if you already have VSCode open, run this command so that it opens the taxonomy folder (note the `--reuse-window` flag).
 
 [.console-input]
 [source,bash,subs="+attributes,macros+"]
@@ -272,6 +272,8 @@ knowledge/economy/finance/insurance/parasol/qna.yaml
 Taxonomy in /home/instruct/.local/share/instructlab/taxonomy is valid :)
 ----
 
+NOTE: If you see an error such as `no new line character at the end of the file`, simply place your cursor at the end of the last line of the taxonomy file, press kbd:[ENTER] to add a new line, and re-run the `ilab diff` command.
+
 If you do not see output similar to the above, you may not have added the entire Q&A file. This is important, as the model will use this file to generate synthetic data in the next section.
 
 == Generating Synthetic Training Data & Training the New Model
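The missing-trailing-newline error mentioned in the NOTE above can also be fixed from a terminal. A sketch, demonstrated on a temporary file so it is safe to try; point `$f` at the `qna.yaml` path reported by the `ilab diff` output instead.

```shell
# Append a trailing newline only if the last byte of the file is not '\n'.
# Demonstrated on a temp file; substitute your taxonomy qna.yaml path for $f.
f=$(mktemp)
printf 'document: parasol' > "$f"      # simulate a file missing its final newline
if [ -n "$(tail -c 1 "$f")" ]; then    # command substitution strips a trailing '\n',
  echo >> "$f"                         # so non-empty output means no newline at EOF
fi
```

Running it a second time makes no further change, since `tail -c 1` then returns only the stripped newline.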

content/modules/ROOT/pages/module-prompt.adoc

Lines changed: 10 additions & 12 deletions
@@ -38,9 +38,9 @@ You might choose to use prompt engineering over other techniques if you're looki
 
 There are many tools and approaches available to you as the AI application developer to interface with an LLM. We will briefly review a few of these and then make a recommendation for you to follow with the subsequent steps of this module.
 
-Before jumping into specific tools, let's review the basics of interfacing with an LLM through a chat or agentic experience. Since LLMs can often support a wide variety of use cases and personas, it is important that the LLM receive clear, upfront guidance to define its objectives, constraints, persona, and tone. These instructions are provided in natural language format that are specified in the "System Prompt". Once a System Prompt is defined and a chat session begins, the System Prompt cannot be changed.
+Before jumping into specific tools, let's review the basics of interfacing with an LLM through a chat or agentic experience. Since LLMs can often support a wide variety of use cases and personas, it is important that the LLM receive clear, upfront guidance to define its objectives, constraints, persona, and tone. These instructions are provided using natural language specified in the "System Prompt". Once a System Prompt is defined and a chat session begins, the System Prompt cannot be changed.
 
-Depending on use case, it may be necessary for the LLM to produce a more creative or a more predictable response to the user message. Temperature is a floating point number, usually between 0 and 1, that is used to steer the model accordingly. Lower temperature values (such as 0) are more predictable and higher values (such as 1) are more creative, although even at 0 LLMs will never product 100% repeatable responses. Many tools simply use 0.8 as a default.
+Depending on the use case, it may be necessary for the LLM to produce a more creative or a more predictable response to the user message. Temperature is a floating point number, usually between 0 and 1, that is used to steer the model accordingly. Lower temperature values (such as 0) are more predictable and higher values (such as 1) are more creative, although even at 0 LLMs will never produce 100% repeatable responses. Many tools simply use 0.8 as a default.
 
 Lastly, while experimenting with LLMs, especially with inferencing servers without a GPU, it is recommended to constrain the LLM from producing overly verbose responses by setting the Max Tokens to an appropriate threshold. This also helps coach the LLM to be more concise with its responses, which can be helpful during testing.
 
@@ -99,7 +99,7 @@ For this section we will be exercising the model with some basic prompts to gain
 
 === Open Dev UI with LangChain4j Chat
 
-Open your workspace in the Dev Spaces per the instructions in the prior section.
+Open your workspace in Dev Spaces per the instructions in the prior section.
 
 Spawn a terminal window within the IDE by clicking on the icon with three parallel bars in the upper left corner of the screen. Choose "Terminal" and then "New Terminal" from the menu.

@@ -397,7 +397,7 @@ If you haven't created *a new Gen AI email service* in the previous module yet,
 sh ${PROJECT_SOURCE}/scripts/create-email-ai-service.sh
 ----
 
-Access the https://parasol-app-{user}-dev-parasol-app-{user}-dev.{openshift_cluster_ingress_domain}[Parasol web page^] to verify the Gen AI email service.
+Access the https://parasol-app-{user}-dev-parasol-app-{user}-dev.{openshift_cluster_ingress_domain}[Parasol web page^] to verify the Gen AI email service. To access the email service, click on the `Email Generate` tab on the left and use it in the following sections:
 ====
 
 ==== Generate an email for `new claim #1` to the [email protected]
@@ -540,19 +540,17 @@ During a prior workshop activity, email response generation using LangChain4j wa
 
 image::prompt/parasol-generate-email-response-form.png[Generate Email Response Web Form]
 
-IMPORTANT: Section 5 of the Parasol AI Developer Workflow module provides steps for introducing a new email feature into the application using generative AI. The following section builds upon this feature with new capabilities. If you have not yet completed that section, you should either do so now or use the following script to automatically incorporate those changes for modification here.
-
 The Parasol Insurance application is invoking the LLM using LangChain4j's AI Service framework. This approach leverages Java interfaces created by the user, with annotations that define the prompt using a String. Let's open the AI Service that was previously created for Email Response Generation:
 
-`parasol-insurance/app/src/main/java/org/parasol/ai/EmailService.java`
+`src/main/java/org/parasol/ai/EmailService.java`
 
 Now, change the current prompt to the one we created together in the previous section.
 
 *Before:*
 
 image::prompt/email-branch-before.png[LangChain4j Email Service Before Functionality Expansion]
 
-Replace the text in the red rectangle with the folowing revised system prompt.
+Replace the text in the red rectangle with the following revised system prompt.
 
 [.console-input]
 [source,text,subs="+attributes,macros+"]
@@ -621,13 +619,13 @@ Assuming your Quarkus environment is still running from the prior steps, the upd
 
 - Reload the Email generate page. It takes 10 - 20 seconds to recompile and apply the new prompt in Quarkus dev mode.
 - Copy and paste the new claim #1 example from the prior section into the form.
-- Click on `Submit`.
+- Click on `Submit`.
 
 image::prompt/new-email-generate-basic.png[New Generate Email Response]
 
 Now, you'll notice that this does not look drastically different from before the enhancement. Let's now add the Forward-To Email Address to the form.
 
-Open the `parasol-insurance/app/src/main/webui/src/app/components/EmailGenerate/EmailGenerate.tsx` typescirpt file for the Generate Email view.
+Open the `src/main/webui/src/app/components/EmailGenerate/EmailGenerate.tsx` TypeScript file for the Generate Email view.
 
 At the top of the source file there is a data structure called EmailResponse. Add `address` of type `string` to the end of the list.
 
@@ -653,7 +651,7 @@ image::prompt/email_response_addresse.png[email_response_addresse]
 
 Additionally, we must add the address field to the REST service's JSON response.
 
-Open the `parasol-insurance/app/src/main/java/org/parasol/model/EmailResponse.java` file to `replace` the constructor with the following code.
+Open the `src/main/java/org/parasol/model/EmailResponse.java` file and `replace` the constructor with the following code.
 
 [.console-input]
 [source,java,subs="+attributes,macros+"]
@@ -663,7 +661,7 @@ public record EmailResponse(String subject, String message, String address) { }
 
 image::prompt/new-field-added-to-email-response.png[Add "address" field to EmailResponse Record]
 
-Now let's revisit the web form and test out the new AI-generated attribute:
+Now let's revisit the https://parasol-app-{user}-dev-parasol-app-{user}-dev.{openshift_cluster_ingress_domain}[web form^] and test out the new AI-generated attribute:
 
 - Reload the Email generate page. It takes 10 - 20 seconds to recompile and apply the new prompt in Quarkus dev mode.
 - Copy and paste the new claim #1 example from the prior section into the form.
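To see what the expanded record gives you at the Java level, here is a standalone sketch. The record signature is taken from the hunk header above; the sample values are made up for illustration.

```java
// Standalone sketch of the expanded EmailResponse record shown in the diff above.
public class EmailResponseSketch {
    public record EmailResponse(String subject, String message, String address) { }

    public static void main(String[] args) {
        // Sample values are illustrative; in the app the LLM populates all three fields.
        EmailResponse r = new EmailResponse(
            "Thank you for contacting Parasol",
            "We received your claim details and will follow up shortly.",
            "claims@example.com");
        // The new address component gets its own generated accessor, which a JSON
        // mapper such as Jackson exposes as an "address" property.
        System.out.println(r.address());
    }
}
```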

content/modules/ROOT/pages/partial-devhub-pre-req.adoc

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ Follow the next steps to create a component based on the pre-defined Software Te
 ==== Provide Information for Application
 
 * *Name*: The name of the component. Replace the *Name* with the following domain:
-ß
+
 [.console-input]
 [source,bash,subs="attributes"]
 ----
