content/modules/ROOT/pages/module-devhub.adoc (+25 -13)
@@ -73,9 +73,9 @@ The output should look similar to the following.
[source,bash]
----
....
- --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
- -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
- --\___\_\____/_/ |_/_/|_/_/|_|\____/___/
+ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
+ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
+ --\___\_\____/_/ |_/_/|_/_/|_|\____/___/
INFO [io.quarkus] (Quarkus Main Thread) insurance-app 1.0.0-SNAPSHOT on JVM (powered by Quarkus xx.xx.xx) started in 19.615s. Listening on: http://0.0.0.0:8080
INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
Press [e] to edit command line args (currently ''), [r] to resume testing, [o] Toggle test output, [:] for the terminal, [h] for more options>
----
- Validate your local Parasol application against the production version by accessing the https://{user}-parasol-insurance-parasol-webui.{openshift_cluster_ingress_domain}[Parasol web page^].
+ Validate your local Parasol application against the production version by accessing the https://{user}-parasol-insurance-parasol-webui.{openshift_cluster_ingress_domain}[Parasol web page^] (it may take up to a minute to come up; wait for the "Listening on..." message shown above).
image::devhub/parasol_ui_web.png[]
+ ==== Preview the changes you need to make
+
+ To add the new feature, you will either create or edit the following files. Use this list as a checklist to ensure you've made all the changes. If you get errors when you try to run the app, be sure each file was changed as described in the instructions below!
+
+ * `src/main/java/org/parasol/model/Email.java` - A Java record defining an incoming email from a customer
+ * `src/main/java/org/parasol/model/EmailResponse.java` - A Java record defining a response (subject + message)
+ * `src/main/java/org/parasol/ai/EmailService.java` - An interface for interacting with the underlying LLM
+ * `src/main/java/org/parasol/resources/EmailResource.java` - A REST-like web frontend interface
+ * `src/main/resources/application.properties` - The Quarkus configuration where you'll define the parameters for connecting to the LLM inference service
+ * `src/main/webui/src/app/components/EmailGenerate/EmailGenerate.tsx` - A React web component providing a simple interface for submitting a customer email
+ * `src/main/webui/src/app/routes.tsx` - A list of React routes to which you'll add the new Email generator component
==== Create Java record beans
Create a new Java record, `Email.java` in the `src/main/java/org/parasol/model` directory to carry email data in a concise and immutable way.
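A minimal sketch of such a record follows; the single `message` field is an assumption for illustration, so use the field names from the workshop's actual listing:

[source,java]
----
package org.parasol.model;

// Hypothetical sketch: a record gives a concise, immutable carrier for the email data.
public record Email(String message) { }
----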
You are a helpful, respectful and honest assistant named "Parasol Assistant".
You are responding to customer emails. Provide a friendly response that is written by Parasol. The response should thank them for being a customer. Include information about when Parasol Insurance was founded.
Your response must look like the following JSON:
{
"subject": [A good one-line greeting],
"message": [Your response, summarizing the information they gave and ask the customer for any follow-up information needed to file a claim]
}
""")
- EmailResponse chat(@UserMessage String claim);
+ EmailResponse chat(@UserMessage String claim); <3>
}
----
- <1> *@RegisterAiService* annotation is pivotal for registering the AI Service, represented as a Java interface.
- <2> *@SystemMessage* annotation defines the scope and initial instructions, serving as the first message sent to the LLM. It delineates the AI service's role in the interaction.
- <3> *@UserMessage* annotation defines primary instructions dispatched to the LLM. It typically encompasses requests and the expected response format.
+ <1> The `@RegisterAiService` annotation registers the AI Service, which is represented as a Java interface.
+ <2> The `@SystemMessage` annotation defines the scope and initial instructions, sent as the first message to the LLM. It delineates the AI service's role in the interaction.
+ <3> The `@UserMessage` annotation defines the primary instructions dispatched to the LLM. It typically contains the request and the expected response format.
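Putting the three callouts together, a condensed sketch of the complete AI Service interface might look like the following; the imports assume the `quarkus-langchain4j` extension, and the `@SystemMessage` body is abbreviated here:

[source,java]
----
package org.parasol.ai;

import org.parasol.model.EmailResponse;

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

// Registers this interface as an AI Service backed by the configured LLM.
@RegisterAiService
public interface EmailService {

    // Scope and initial instructions; the full prompt text is shown in the diff above.
    @SystemMessage("""
            You are a helpful, respectful and honest assistant named "Parasol Assistant".
            ...
            """)
    EmailResponse chat(@UserMessage String claim);
}
----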
- <1> Specify the model provider (e.g., openai, huggingface). Note that you use the Open AI API specification when you connedt to the LLM (parasol-instruct) inference endpoint
+ <1> Specify the model provider (e.g., openai, huggingface). Note that you use the OpenAI API specification when you connect to the LLM (parasol-instruct) inference endpoint.
<2> Set the model temperature. Temperature is a parameter used in natural language processing models to increase or decrease the “confidence” a model has in its most likely response
<3> Specify the timeout between question and response in the LLM
<4> Specify the model name to connect to
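For reference, callouts <1> through <4> above typically correspond to `quarkus-langchain4j` properties along these lines; this is a sketch with placeholder values, not the workshop's exact configuration:

[source,properties]
----
# (1) provider selection - the app talks to the LLM over the OpenAI-compatible API
quarkus.langchain4j.chat-model.provider=openai
# (2) model temperature
quarkus.langchain4j.openai.chat-model.temperature=0.3
# (3) timeout between question and response
quarkus.langchain4j.openai.timeout=60s
# (4) model name to connect to
quarkus.langchain4j.openai.chat-model.model-name=parasol-instruct
# endpoint base URL (placeholder, not the workshop's real value)
quarkus.langchain4j.openai.base-url=http://your-inference-endpoint/v1
----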
@@ -313,7 +325,7 @@ This interface allows customer service representatives to copy and paste the cus
==== Add a new menu item in the navigation bar
- Open the `routes.tsx` file in the `src/main/webui/src/app` directory and `uncomment` the following code out in line *8* and *89 - 95* to show the new menu item.
+ Open the `routes.tsx` file in the `src/main/webui/src/app` directory and `uncomment` the following code on lines *8* and *89 - 95* to show the new menu item.
content/modules/ROOT/pages/module-ilab.adoc (+3 -1)
@@ -161,7 +161,7 @@ Now that we understand the constructs of the taxonomy's knowledge, let's go ahea
==== Open the `instructlab` taxonomy directory in Visual Studio Code
- You can open VSCode by following the instructions below:
+ Open VSCode by running the command below. Even if you already have VSCode open, run this command again so that it opens the taxonomy folder (notice the `--reuse-window` flag).
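The command itself is elided from this diff; based on the `--reuse-window` flag mentioned above and the taxonomy path shown in the validation output below, it presumably resembles:

[source,bash]
----
code --reuse-window /home/instruct/.local/share/instructlab/taxonomy
----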
Taxonomy in /home/instruct/.local/share/instructlab/taxonomy is valid :)
----
+ NOTE: If you see an error such as `no new line character at the end of the file`, simply place your cursor at the end of the last line of the taxonomy file, press kbd:[ENTER] to add a new line, and re-run the `ilab diff` command.
If you do not see output similar to the above, you may not have added all of the Q&A file content. This is important, as the model will use this file to generate synthetic data in the next section.
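To re-validate after any fix, re-run the command from this module (a sketch; assumes the `ilab` CLI is on your PATH):

[source,bash]
----
# Expect: Taxonomy in /home/instruct/.local/share/instructlab/taxonomy is valid :)
ilab diff
----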
== Generating Synthetic Training Data & Training the New Model
content/modules/ROOT/pages/module-prompt.adoc (+10 -12)
@@ -38,9 +38,9 @@ You might choose to use prompt engineering over other techniques if you're looki
There are many tools and approaches available to you as the AI application developer to interface with an LLM. We will briefly review a few of these and then make a recommendation for you to follow in the subsequent steps of this module.
- Before jumping into specific tools, let's review the basics of interfacing with an LLM through a chat or agentic experience. Since LLMs can often support a wide variety of use cases and personas, it is important that the LLM receive clear, upfront guidance to define its objectives, constraints, persona, and tone. These instructions are provided in natural language format that are specified in the "System Prompt". Once a System Prompt is defined and a chat session begins, the System Prompt cannot be changed.
+ Before jumping into specific tools, let's review the basics of interfacing with an LLM through a chat or agentic experience. Since LLMs can often support a wide variety of use cases and personas, it is important that the LLM receive clear, upfront guidance to define its objectives, constraints, persona, and tone. These instructions are provided using natural language specified in the "System Prompt". Once a System Prompt is defined and a chat session begins, the System Prompt cannot be changed.
- Depending on use case, it may be necessary for the LLM to produce a more creative or a more predictable response to the user message. Temperature is a floating point number, usually between 0 and 1, that is used to steer the model accordingly. Lower temperature values (such as 0) are more predictable and higher values (such as 1) are more creative, although even at 0 LLMs will never product 100% repeatable responses. Many tools simply use 0.8 as a default.
+ Depending on the use case, it may be necessary for the LLM to produce a more creative or a more predictable response to the user message. Temperature is a floating point number, usually between 0 and 1, that is used to steer the model accordingly. Lower temperature values (such as 0) are more predictable and higher values (such as 1) are more creative, although even at 0 LLMs will never produce 100% repeatable responses. Many tools simply use 0.8 as a default.
Lastly, while experimenting with LLMs, especially with inferencing servers without a GPU, it is recommended to constrain the LLM from producing overly verbose responses by setting the Max Tokens to an appropriate threshold. This also helps coach the LLM to be more concise with its responses, which can be helpful during testing.
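For example, with the `quarkus-langchain4j` OpenAI extension used elsewhere in this workshop, both knobs can be set in `application.properties`; the values below are illustrative only:

[source,properties]
----
# Illustrative values, not the workshop's exact configuration
quarkus.langchain4j.openai.chat-model.temperature=0
quarkus.langchain4j.openai.chat-model.max-tokens=250
----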
@@ -99,7 +99,7 @@ For this section we will be exercising the model with some basic prompts to gain
=== Open Dev UI with LangChain4j Chat
- Open your workspace in the Dev Spaces per the instructions in the prior section.
+ Open your workspace in Dev Spaces per the instructions in the prior section.
Spawn a terminal window within the IDE by clicking on the icon with three parallel bars in the upper left corner of the screen. Choose "Terminal" and then the "New Terminal" menu entry from the list.
@@ -397,7 +397,7 @@ If you haven't created *a new Gen AI email service* in the previous module yet,
sh ${PROJECT_SOURCE}/scripts/create-email-ai-service.sh
----
- Access the https://parasol-app-{user}-dev-parasol-app-{user}-dev.{openshift_cluster_ingress_domain}[Parasol web page^]to verify the Gen AI email service.
+ Access the https://parasol-app-{user}-dev-parasol-app-{user}-dev.{openshift_cluster_ingress_domain}[Parasol web page^] to verify the Gen AI email service. To access the email service, click on the `Email Generate` tab on the left and use it in the following sections:
@@ -540,19 +540,17 @@ During a prior workshop activity, email response generation using LangChain4j wa
image::prompt/parasol-generate-email-response-form.png[Generate Email Response Web Form]
- IMPORTANT: Section 5 of the Parasol AI Developer Workflow module provides steps for introducing a new email feature into the application using generative AI. The following section builds upon this feature with new capabilities. If you have not yet completed that section, you should either do so now or use the following script to automatically incorporate those changes for modification here.
The Parasol Insurance application is invoking the LLM using LangChain4j's AI Service framework. This approach leverages Java Interfaces created by the user with annotations that define the prompt using a String. Let's open the AI Service that was previously created for Email Response Generation:
Additionally, we must add the address field to the REST Service's JSON Response.
- Open the `parasol-insurance/app/src/main/java/org/parasol/model/EmailResponse.java` file to `replace` the constructor with the following code.
+ Open the `src/main/java/org/parasol/model/EmailResponse.java` file to `replace` the constructor with the following code.
[.console-input]
[source,java,subs="+attributes,macros+"]
@@ -663,7 +661,7 @@ public record EmailResponse(String subject, String message, String address) { }
image::prompt/new-field-added-to-email-response.png[Add "address" field to EmailResponse Record]
- Now let's revisit the web form and test out the new AI-generated attribute:
+ Now let's revisit the https://parasol-app-{user}-dev-parasol-app-{user}-dev.{openshift_cluster_ingress_domain}[web form^] and test out the new AI-generated attribute:
- Reload the Email generate page. It takes 10 - 20 seconds to recompile and apply the new prompt in Quarkus dev mode.
- Copy and paste the new claim #1 example from the prior section into the form.