---
[`node-llama-cpp`](https://node-llama-cpp.withcat.ai) 3.0 is finally here.

With [`node-llama-cpp`](https://node-llama-cpp.withcat.ai), you can run large language models locally on your machine using the power of [`llama.cpp`](https://github.com/ggml-org/llama.cpp) with a simple and easy-to-use API.

It includes everything you need, from downloading models, to running them in the most optimized way for your hardware, to integrating them into your projects.
While `llama.cpp` is an amazing project, it's also highly technical and can be challenging to use.

`node-llama-cpp` bridges that gap, making `llama.cpp` accessible to everyone, regardless of their experience level.

### Performance
[`node-llama-cpp`](https://node-llama-cpp.withcat.ai) is built on top of [`llama.cpp`](https://github.com/ggml-org/llama.cpp), a highly optimized C++ library for running large language models.

`llama.cpp` supports many compute backends, including Metal, CUDA, and Vulkan. It also uses [Accelerate](https://developer.apple.com/accelerate/) on Mac.
```shell
npx -y node-llama-cpp chat
```

Check out the [getting started guide](../guide/index.md) to learn how to use `node-llama-cpp`.

## Thank You
`node-llama-cpp` is only possible thanks to the amazing work done on [`llama.cpp`](https://github.com/ggml-org/llama.cpp) by [Georgi Gerganov](https://github.com/ggerganov), [Slaren](https://github.com/slaren), and all the contributors from the community.

## What's next?
Version 3.0 is a major milestone, but there's plenty more planned for the future.