Commit b299d18

Update README.md
1 parent 221dade commit b299d18

File tree

1 file changed: +37 −4 lines changed

README.md

Lines changed: 37 additions & 4 deletions
@@ -91,7 +91,10 @@ cd GPULlama3.java
 
 # Update the submodules to match the exact commit point recorded in this repository
 git submodule update --recursive
+```
 
+#### On Linux or macOS
+```bash
 # Enter the TornadoVM submodule directory
 cd external/tornadovm
 
@@ -110,9 +113,6 @@ source setvars.sh
 # Navigate back to the project root directory
 cd ../../
 
-# Make the llama-tornado script executable
-chmod +x llama-tornado
-
 # Source the project-specific environment paths -> this will ensure the correct paths are set for the project and the TornadoVM SDK
 # Expect to see: [INFO] Environment configured for Llama3 with TornadoVM at: /home/YOUR_PATH_TO_TORNADOVM
 source set_paths
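The `set_paths` step prints a confirmation line once the TornadoVM SDK location is known. A hedged sketch of that kind of readiness check; the `TORNADO_SDK` variable name and the error message are assumptions for illustration, not confirmed by this diff — consult `set_paths` / `setvars.sh` for the variables actually exported:

```python
def check_environment(env: dict) -> str:
    # TORNADO_SDK is an assumed variable name for illustration only.
    sdk = env.get("TORNADO_SDK")
    if not sdk:
        return "[ERROR] TornadoVM SDK not configured - run `source set_paths` first"
    return f"[INFO] Environment configured for Llama3 with TornadoVM at: {sdk}"

# Takes any mapping, so it can be exercised without touching os.environ:
print(check_environment({"TORNADO_SDK": "/home/user/tornadovm/bin/sdk"}))
```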
@@ -124,6 +124,39 @@ make
 # Run the model (make sure you have downloaded the model file first - see below)
 ./llama-tornado --gpu --verbose-init --opencl --model beehive-llama-3.2-1b-instruct-fp16.gguf --prompt "tell me a joke"
 ```
+
+#### On Windows
+```bash
+# Enter the TornadoVM submodule directory
+cd external/tornadovm
+
+# Optional: Create and activate a Python virtual environment if needed
+python -m venv .venv
+.venv\Scripts\activate.bat
+.\bin\windowsMicrosoftStudioTools2022.cmd
+
+# Install TornadoVM with a supported JDK 21 and select the backends (--backend opencl,ptx).
+# To see the compatible JDKs run: ./bin/tornadovm-installer --listJDKs
+# For example, to install with OpenJDK 21 and build the OpenCL backend, run:
+python bin\tornadovm-installer --jdk jdk21 --backend opencl
+
+# Source the TornadoVM environment variables
+setvars.cmd
+
+# Navigate back to the project root directory
+cd ../../
+
+# Source the project-specific environment paths -> this will ensure the correct paths are set for the project and the TornadoVM SDK
+# Expect to see: [INFO] Environment configured for Llama3 with TornadoVM at: C:\Users\YOUR_PATH_TO_TORNADOVM
+set_paths.cmd
+
+# Build the project using Maven (skip tests for faster build)
+# mvn clean package -DskipTests or just make
+make
+
+# Run the model (make sure you have downloaded the model file first - see below)
+python llama-tornado --gpu --verbose-init --opencl --model beehive-llama-3.2-1b-instruct-fp16.gguf --prompt "tell me a joke"
+```
 -----------
 
 The above model can be swapped with one of the other models, such as `beehive-llama-3.2-3b-instruct-fp16.gguf` or `beehive-llama-3.2-8b-instruct-fp16.gguf`, depending on your needs.
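The Linux/macOS and Windows runs differ only in how the script is launched (`./llama-tornado` directly versus `python llama-tornado`, since Windows has no executable bit). A minimal sketch of assembling the right command line per platform; the flag names are taken from the snippets above, while the helper function itself is hypothetical:

```python
import sys

def build_command(model: str, prompt: str, platform: str = sys.platform) -> list:
    # On Windows the script is run through the Python interpreter;
    # elsewhere it is executed directly as ./llama-tornado.
    if platform.startswith("win"):
        launcher = ["python", "llama-tornado"]
    else:
        launcher = ["./llama-tornado"]
    return launcher + [
        "--gpu", "--verbose-init", "--opencl",
        "--model", model,
        "--prompt", prompt,
    ]

cmd = build_command("beehive-llama-3.2-1b-instruct-fp16.gguf",
                    "tell me a joke", platform="linux")
print(" ".join(cmd))
```

The resulting list can be handed to `subprocess.run` without shell quoting concerns.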
@@ -182,7 +215,7 @@ Run a model with a text prompt:
 #### GPU Execution (FP16 Model)
 Enable GPU acceleration with the FP16 model:
 ```bash
-llama-tornado --gpu --verbose-init --model beehive-llama-3.2-1b-instruct-fp16.gguf --prompt "tell me a joke"
+./llama-tornado --gpu --verbose-init --model beehive-llama-3.2-1b-instruct-fp16.gguf --prompt "tell me a joke"
 ```
 
 -----------
