Commit b3758b4

docs(README): improve contributing docs (#42)
* docs: improve contributing docs
* fix: make dev env work on Windows
* docs: add mdn links to generated documentation
1 parent 01b89ce commit b3758b4

File tree

5 files changed: +385 -18 lines changed

CONTRIBUTING.md

Lines changed: 3 additions & 0 deletions
@@ -1,3 +1,6 @@
+## Development
+To set up your development environment, read [DEVELOPMENT.md](https://github.com/withcatai/node-llama-cpp/blob/master/DEVELOPMENT.md).
+
 ## <a name="commit"></a> Commit Message Guidelines

 This repository has very precise rules over how git commit messages can be formatted.

DEVELOPMENT.md

Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@
# How to contribute to `node-llama-cpp`

This document describes how to set up your development environment to contribute to `node-llama-cpp`.

## Prerequisites
- [Git](https://git-scm.com/). [GitHub's Guide to Installing Git](https://help.github.com/articles/set-up-git) is a good source of information.
- [Node.js](https://nodejs.org/en/) (v18 or higher)
- [cmake dependencies](https://github.com/cmake-js/cmake-js#installation:~:text=projectRoot/build%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%5Bstring%5D-,Requirements%3A,-CMake) - make sure the required dependencies of `cmake` are installed on your machine (you don't necessarily have to install `cmake` itself, just its dependencies)
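Before moving on to setup, you can sanity-check these prerequisites from a terminal. This is just a convenience sketch, not part of the project's tooling:

```shell
# Report which prerequisite tools are on the PATH, with their versions.
for tool in git node npm cmake; do
    if command -v "$tool" >/dev/null 2>&1; then
        printf '%s: %s\n' "$tool" "$("$tool" --version 2>/dev/null | head -n 1)"
    else
        printf '%s: NOT FOUND\n' "$tool"
    fi
done
```

Note that `cmake` showing as `NOT FOUND` is not necessarily a problem; as mentioned above, only its build dependencies are required.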
## Setup
1. [Fork the `node-llama-cpp` repo](https://github.com/withcatai/node-llama-cpp/fork)
2. Clone your forked repo to your local machine
3. Install dependencies:
   ```bash
   npm install
   ```
4. Build the CLI, use the CLI to clone the latest release of `llama.cpp`, and build it from source:
   ```bash
   npm run dev:setup
   ```
## Development
Whenever you add new functionality to `node-llama-cpp`, consider improving the CLI to reflect this change.

To test whether your local setup works, download a model and try using it with the `chat` command.
### Get a model file
We recommend getting a GGUF model from [TheBloke on Hugging Face](https://huggingface.co/TheBloke?search_models=GGUF).

We recommend starting with a small model that doesn't have a lot of parameters, just to ensure your setup works, so try downloading a `7B` parameter model first (search for models with both `7B` and `GGUF` in their name).

For improved download speeds, you can use [`ipull`](https://www.npmjs.com/package/ipull) to download the model:
```bash
npx ipull <model-file-url>
```
### Validate your setup by chatting with a model
To validate that your setup works, run the following command to chat with the model you downloaded:
```bash
npm run dev:build; node ./dist/cli/cli.js chat --model <path-to-model-file-on-your-computer>
```

Try telling the model `Hi there` and see how it reacts. Any response from the model means that your setup works.

If the response looks weird or doesn't make sense, try using a different model.

If the model doesn't stop generating output, try using a different chat wrapper. For example:
```bash
npm run dev:build; node ./dist/cli/cli.js chat --wrapper llamaChat --model <path-to-model-file-on-your-computer>
```

> **Important:** Make sure you always run `npm run dev:build` before running the CLI, so that your code changes are reflected in the CLI.
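To avoid forgetting the build step, the build-then-chat pair above can be wrapped in a small shell function. This helper is a sketch and not part of the repository's scripts; the name `dev_chat` is made up here:

```shell
# dev_chat: rebuild the CLI, then start a chat session with the given model.
# (Hypothetical helper; not part of node-llama-cpp itself.)
dev_chat() {
    model="${1:?usage: dev_chat <path-to-model-file>}"
    npm run dev:build || return 1            # never chat with a stale build
    node ./dist/cli/cli.js chat --model "$model"
}
```

Run it from the repository root, e.g. `dev_chat ./models/some-model.gguf` (the model path is a placeholder for a file you downloaded yourself).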
### Debugging
To run a chat session with a debugger, configure your IDE to run the following command with a debugger:
```bash
node --loader ts-node/esm ./src/cli/cli.ts chat --model <path-to-model-file-on-your-computer>
```
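If your IDE is VS Code (an assumption; any debugger that can launch Node.js with custom flags works), a `launch.json` entry along these lines runs the command above under the debugger. The configuration name and model path are placeholders:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Debug chat (example)",
            "type": "node",
            "request": "launch",
            "runtimeArgs": ["--loader", "ts-node/esm"],
            "program": "${workspaceFolder}/src/cli/cli.ts",
            "args": ["chat", "--model", "<path-to-model-file-on-your-computer>"],
            "console": "integratedTerminal"
        }
    ]
}
```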
## Opening a pull request
To open a pull request, read the [CONTRIBUTING.md](https://github.com/withcatai/node-llama-cpp/blob/master/CONTRIBUTING.md) guidelines.

README.md

Lines changed: 3 additions & 0 deletions
@@ -322,6 +322,9 @@ Options:
   -v, --version  Show version number  [boolean]
 ```

+## Contributing
+To contribute to `node-llama-cpp`, read [CONTRIBUTING.md](https://github.com/withcatai/node-llama-cpp/blob/master/CONTRIBUTING.md).
+
 ## Acknowledgements
 * llama.cpp: [ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
0 commit comments