Commit 9ad5e60

examples : fix some typos in examples/model-conversion/README.md (ggml-org#15477)
Signed-off-by: Jie Fu <[email protected]>
1 parent 715a6db commit 9ad5e60

examples/model-conversion/README.md

Lines changed: 8 additions & 8 deletions
@@ -6,7 +6,7 @@ The motivation for having this is that the conversion process can often be an
 iterative process, where the original model is inspected, converted, updates
 made to llama.cpp, converted again, etc. Once the model has been converted it
 needs to be verified against the original model, and then optionally quantified,
-and is some cases perplexity checked of the quantized model. And finally the
+and in some cases perplexity checked of the quantized model. And finally the
 model/models need to the ggml-org on Hugging Face. This tool/example tries to
 help with this process.

@@ -62,7 +62,7 @@ Command line arguments take precedence over environment variables when both are
 
 In cases where the transformer implementation for the model has not been released
 yet it is possible to set the environment variable `UNRELEASED_MODEL_NAME` which
-will the cause the transformer implementation to be loaded explicitely and not
+will then cause the transformer implementation to be loaded explicitely and not
 use AutoModelForCausalLM:
 ```
 export UNRELEASED_MODEL_NAME=SomeNewModel
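For illustration, the variable would be paired with a conversion target; the `causal-convert-model` target name below is an assumption and does not appear in this diff:

```console
# Sketch only: the make target name is hypothetical, not taken from this commit.
export UNRELEASED_MODEL_NAME=SomeNewModel
(venv) $ make causal-convert-model MODEL_PATH=~/work/ai/models/SomeNewModel
```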
@@ -87,7 +87,7 @@ from the converted model.
 # Or using command line argument
 (venv) $ make causal-run-original-model MODEL_PATH=~/work/ai/models/some_model
 ```
-This command will save two file to the `data` directory, one is a binary file
+This command will save two files to the `data` directory, one is a binary file
 containing logits which will be used for comparison with the converted model
 later, and the other is a text file which allows for manual visual inspection.

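As a quick illustration of inspecting those two outputs (the file names under `data/` are not shown in this diff, so the ones below are hypothetical):

```console
# Sketch only: actual file names in data/ may differ.
(venv) $ ls data/
(venv) $ head data/some_model.txt   # the text file meant for manual inspection
```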
@@ -128,11 +128,11 @@ Quantized model saved to: /path/to/quantized/model-Q8_0.gguf
 Export the quantized model path to QUANTIZED_MODEL variable in your environment
 ```
 This will show the path to the quantized model in the terminal, which can then
-be used set the `QUANTIZED_MODEL` environment variable:
+be used to set the `QUANTIZED_MODEL` environment variable:
 ```console
 export QUANTIZED_MODEL=/path/to/quantized/model-Q8_0.gguf
 ```
-The the quantized model can be run using the following command:
+Then the quantized model can be run using the following command:
 ```console
 (venv) $ make causal-run-quantized-model
 ```
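Since command line arguments take precedence over environment variables, the same run can presumably be collapsed into one line, by analogy with the `MODEL_PATH=...` form above; this one-liner is a sketch, not a line from this diff:

```console
# Assumed one-liner form; mirrors make causal-run-original-model MODEL_PATH=...
(venv) $ make causal-run-quantized-model QUANTIZED_MODEL=/path/to/quantized/model-Q8_0.gguf
```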
@@ -229,11 +229,11 @@ Quantized model saved to: /path/to/quantized/model-Q8_0.gguf
 Export the quantized model path to QUANTIZED_EMBEDDING_MODEL variable in your environment
 ```
 This will show the path to the quantized model in the terminal, which can then
-be used set the `QUANTIZED_EMBEDDING_MODEL` environment variable:
+be used to set the `QUANTIZED_EMBEDDING_MODEL` environment variable:
 ```console
 export QUANTIZED_EMBEDDING_MODEL=/path/to/quantized/model-Q8_0.gguf
 ```
-The the quantized model can be run using the following command:
+Then the quantized model can be run using the following command:
 ```console
 (venv) $ make embedding-run-quantized-model
 ```
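The embedding variant presumably admits the same shorthand; again a sketch, not a line from this diff:

```console
# Assumed one-liner form, by analogy with the causal target above.
(venv) $ make embedding-run-quantized-model QUANTIZED_EMBEDDING_MODEL=/path/to/quantized/model-Q8_0.gguf
```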
@@ -246,7 +246,7 @@ token/logits file:
 ```console
 (venv) $ make perplexity-run QUANTIZED_MODEL=~/path/to/quantized/model.gguf
 ```
-This will use the wikitext dataset to run the perplexity evaluation and and
+This will use the wikitext dataset to run the perplexity evaluation and
 output the perplexity score to the terminal. This value can then be compared
 with the perplexity score of the unquantized model.

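By the same precedence rule, the perplexity run should presumably also pick the model up from the environment when the variable is exported; a sketch, not shown in this diff:

```console
# Assumed env-var form of the perplexity run.
export QUANTIZED_MODEL=~/path/to/quantized/model.gguf
(venv) $ make perplexity-run
```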