Commit f132826

mariofusco and geoand authored
Update _posts/2024-11-29-quarkus-jlama.adoc
Co-authored-by: Georgios Andrianakis <[email protected]>
1 parent b646e74 commit f132826

1 file changed: 1 addition, 1 deletion


_posts/2024-11-29-quarkus-jlama.adoc

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ https://github.com/tjake/Jlama[Jlama] is a library allowing to execute LLM infer

 Jlama is well integrated with Quarkus through the https://quarkus.io/extensions/io.quarkiverse.langchain4j/quarkus-langchain4j-jlama/[dedicated lanchain4j based extension]. Note that for performance reasons Jlama uses the https://openjdk.org/jeps/469[Vector API] which is still in preview in Java 23, and very likely will be released as a supported feature in Java 25.

-In essence Jlama makes it possible to serve a LLM in Java, eventually directly embedded in the same JVM running your Java application, but why could this be useful? Actually this is desirable in many use cases and presents a number of relevant advantages like the following:
+In essence Jlama makes it possible to serve an LLM in Java, directly embedded in the same JVM running your Java application, but why could this be useful? Actually this is desirable in many use cases and presents a number of relevant advantages like the following:

 . *Fast development/prototyping*: Not having to install, configure and interact with an external server can make the development of a LLM-based Java application much easier.
 . *Easy models testing*: Running the LLM inference embedded in the JVM also makes it easier to test different models and their integration during the development phase.
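The paragraph touched by this diff describes embedding Jlama-served models directly in the JVM via the Quarkus extension. As a rough sketch of what that looks like in practice (the property names and model identifier below are assumptions based on Quarkiverse naming conventions, not taken from this commit), the extension is driven by application configuration rather than code:

```properties
# application.properties — hypothetical configuration for the
# quarkus-langchain4j-jlama extension; property names and the
# model identifier are assumed, check the extension docs.
quarkus.langchain4j.jlama.chat-model.model-name=tjake/Llama-3.2-1B-Instruct-JQ4
quarkus.langchain4j.jlama.chat-model.temperature=0
```

With a configuration along these lines, the model weights are fetched and loaded in-process, so the application talks to the LLM without any external inference server, which is the embedded-JVM scenario the changed sentence describes.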
