Commit 855a409

Author: Scott Straughan
Message: Tweaks to code block.
1 parent c22ebd5

2 files changed: +2 -2 lines


_collections/_updates/2024-07-31-porting-ai-codes-from-cuda-to-sycl-and-oneapi-one-llama-at-a-time-part-one.md
Lines changed: 1 addition & 1 deletion

@@ -106,7 +106,7 @@ The first step is to clone the llama.cpp repository, and configure cmake as usua
 $ git clone https://github.com/ggerganov/llama.cpp.git
 $ cd llama.cpp
 $ git checkout 3c04bf6da89eaf4c7d317e0518f0687dfcbf2de7
-$ mkdir build && cd build
+$ mkdir build && cd build
 $ cmake .. -DLLAMA_CUBLAS=ON -DLLAMA_CUDA=ON -
 $ DCMAKE_CUDA_ARCHITECTURES=80
 ```
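In the diffed snippet, the cmake invocation is wrapped across two lines: a trailing `-` on the cmake line followed by `$ DCMAKE_CUDA_ARCHITECTURES=80` on the next. A minimal sketch of how those two fragments join into a single flag, assuming the wrap is a rendering artifact (not confirmed by this commit):

```shell
# The two wrapped fragments as they appear in the diffed article:
part1='cmake .. -DLLAMA_CUBLAS=ON -DLLAMA_CUDA=ON -'
part2='DCMAKE_CUDA_ARCHITECTURES=80'

# Concatenated, they form the assumed single-line invocation, where
# -DCMAKE_CUDA_ARCHITECTURES=80 selects compute capability 8.0 (Ampere).
full="${part1}${part2}"
echo "$full"
```

This prints `cmake .. -DLLAMA_CUBLAS=ON -DLLAMA_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=80`, the one-line form a reader would actually run inside the `build` directory.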

_collections/_updates/2024-08-13-part-two-porting-ai-codes-from-cuda-to-sycl-and-oneapi-one-llama-at-a-time.md
Lines changed: 1 addition & 1 deletion

@@ -21,7 +21,7 @@ Now we are going to build the converted code directly using the CMake file that
 build the main binary for llama.cpp.
 
 ```shell
-$ cd dpct_out && mkdir syclbuild && cd syclbuild
+$ cd dpct_out && mkdir syclbuild && cd syclbuild
 $ MKLROOT=/home/ruyman/soft/mkl CC=icx CXX=icpx cmake .. -DLLAMA_CUBLAS=ON -DCMAKE_CUDA_ARCHITECTURES=80 -DCMAKE_CXX_FLAGS="-fsycl -fsycl-targets=nvptx64-nvidia-cuda -L${MKLROOT}/lib"
 $ make main
 ```

0 commit comments