# research

This project uses Quarkus, the Supersonic Subatomic Java Framework, and the
[Model Context Protocol](https://modelcontextprotocol.io/) to implement a simple agentic app using multiple MCP servers and Quarkus + LangChain4j.

If you want to learn more about Quarkus, please visit its website: https://quarkus.io/.

## Running the application in dev mode

Set the appropriate API keys in `application.properties` and create a directory called `playground` in your clone if you wish to use the `filesystem` MCP server.
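
As an illustration, the configuration might look like the sketch below. The exact property names depend on the extensions in your `pom.xml`; the MCP server keys shown here are assumptions based on the quarkus-langchain4j MCP extension, and the server name `filesystem` and the `npx` command are illustrative, so adjust them to match your setup:

```properties
# OpenAI chat model API key, read from an environment variable rather than
# committed to source control (quarkus-langchain4j-openai property)
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}

# Hypothetical stdio-based filesystem MCP server rooted at ./playground
quarkus.langchain4j.mcp.filesystem.transport-type=stdio
quarkus.langchain4j.mcp.filesystem.command=npx,-y,@modelcontextprotocol/server-filesystem,playground
```

The `playground` directory itself can be created with `mkdir playground` from the repository root.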

You can run your application in dev mode that enables live coding using:
```shell script
./mvnw compile quarkus:dev
```
In dev mode, you can use the Dev UI to chat with the LLM you've configured.

> **_NOTE:_** Quarkus now ships with a Dev UI, which is available in dev mode only at http://localhost:8080/q/dev/.

## Packaging and running the application

The application can be packaged using:
```shell script
./mvnw package
```
It produces the `quarkus-run.jar` file in the `target/quarkus-app/` directory.
Be aware that it’s not an _über-jar_ as the dependencies are copied into the `target/quarkus-app/lib/` directory.

The application is now runnable using `java -jar target/quarkus-app/quarkus-run.jar`.

If you want to build an _über-jar_, execute the following command:
```shell script
./mvnw package -Dquarkus.package.type=uber-jar
```

The application, packaged as an _über-jar_, is now runnable using `java -jar target/*-runner.jar`.

## Creating a native executable

You can create a native executable using:
```shell script
./mvnw package -Dnative
```

Or, if you don't have GraalVM installed, you can run the native executable build in a container using:
```shell script
./mvnw package -Dnative -Dquarkus.native.container-build=true
```

You can then execute your native executable with: `./target/research-1.0-SNAPSHOT-runner`

If you want to learn more about building native executables, please consult https://quarkus.io/guides/maven-tooling.

## Related Guides

- LangChain4j Model Context Protocol client ([guide](https://docs.quarkiverse.io/quarkus-langchain4j/dev/index.html)): Provides the Model Context Protocol client-side implementation for LangChain4j
- LangChain4j OpenAI ([guide](https://docs.quarkiverse.io/quarkus-langchain4j/dev/index.html)): Provides the basic integration with LangChain4j