_posts/2024-11-29-quarkus-jlama.adoc (1 addition & 1 deletion)
@@ -21,7 +21,7 @@ https://github.com/tjake/Jlama[Jlama] is a library allowing to execute LLM infer
Jlama is well integrated with Quarkus through the https://quarkus.io/extensions/io.quarkiverse.langchain4j/quarkus-langchain4j-jlama/[dedicated LangChain4j-based extension]. Note that for performance reasons Jlama uses the https://openjdk.org/jeps/469[Vector API], which is still incubating in Java 23 and will very likely be released as a supported feature in Java 25.
23
23
24
-In essence Jlama makes it possible to serve a LLM in Java, eventually directly embedded in the same JVM running your Java application, but why could this be useful? Actually this is desirable in many use cases and presents a number of relevant advantages like the following:
+In essence Jlama makes it possible to serve an LLM in Java, directly embedded in the same JVM running your Java application, but why could this be useful? This is in fact desirable in many use cases and brings a number of relevant advantages, such as the following:
. *Fast development/prototyping*: Not having to install, configure and interact with an external server can make the development of an LLM-based Java application much easier.
. *Easy model testing*: Running the LLM inference embedded in the JVM also makes it easier to test different models and their integration during the development phase.
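
To make the embedded setup more concrete, here is a minimal sketch of an AI service using the quarkus-langchain4j programming model. The `ChatService` interface and its prompt are hypothetical examples; the sketch assumes the quarkus-langchain4j-jlama extension is on the classpath so that inference runs in-process through Jlama.

[source,java]
----
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

// Hypothetical AI service: with quarkus-langchain4j-jlama on the classpath,
// calls to this interface are answered by a Jlama model loaded in the same
// JVM as the application, with no external inference server involved.
@RegisterAiService
public interface ChatService {

    @SystemMessage("You are a concise assistant.")
    String chat(@UserMessage String question);
}
----

Since the model runs inside the application's own JVM, that JVM must have the incubating Vector API module enabled; refer to the Jlama and extension documentation for the exact startup flags.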