Commit 24ca52f

LanguageModel: Disable top-p sampling by default
This matches upstream llama2.c and prevents a confusing message when running the basic example, which specifies a temperature (thus disabling the default top-p sampling).
1 parent: 3188c50
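To make the interaction concrete, here is a hypothetical sketch of the basic example's options. The constructor call is assumed, not taken from the library; only the option names and defaults come from the diff below.

```js
// Hypothetical usage mirroring the basic example described above.
const lm = new LanguageModel({
  temperature: 0.9, // the basic example sets a temperature explicitly...
  // ...which, per the commit message, disabled the old top-p default (0.9)
  // and produced a confusing message. With topp now defaulting to 0 (off),
  // there is nothing to disable, so the message no longer appears.
});
```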

File tree

1 file changed (+1, -1)

src/LanguageModel/index.js

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ class LanguageModel extends EventEmitter {
   tokenizerUrl: '', // if set, tokenizer.bin will be preloaded from provided URL (assumed to be embedded in llama2.data if not)
   maxTokens: 0, // how many tokens to generate (defaults to model's maximum)
   temperature: 1.0, // 0.0 = (deterministic) argmax sampling, 1.0 = baseline, don't set higher
-  topp: 0.9, // p value in top-p (nucleus) sampling, 0 = off
+  topp: 0, // p value in top-p (nucleus) sampling, 0 = off
   stopOnBosOrEos: true, // stop when encountering beginning-of-sequence or end-of-sequence token
 };
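For reference, here is a minimal sketch of what these two options control, following the behavior described by the diff's comments (temperature 0 = deterministic argmax, topp 0 = nucleus sampling off). The function names (softmax, sampleToken, sampleFrom) are illustrative, not part of the library's API.

```js
// Convert logits to probabilities at a given temperature.
function softmax(logits, temperature) {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled);
  const exps = scaled.map((l) => Math.exp(l - max)); // subtract max for numerical stability
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Sample a token id from pairs of [tokenId, probability].
function sampleFrom(pairs) {
  let r = Math.random();
  for (const [i, p] of pairs) {
    r -= p;
    if (r <= 0) return i;
  }
  return pairs[pairs.length - 1][0]; // guard against rounding error
}

// temperature = 0 falls back to argmax; topp = 0 disables nucleus sampling,
// matching the defaults commented in the diff above.
function sampleToken(logits, { temperature = 1.0, topp = 0 } = {}) {
  if (temperature === 0) {
    return logits.indexOf(Math.max(...logits)); // deterministic argmax
  }
  const probs = softmax(logits, temperature);
  if (topp <= 0 || topp >= 1) {
    // Plain multinomial sampling over the full distribution.
    return sampleFrom(probs.map((p, i) => [i, p]));
  }
  // Nucleus sampling: keep the smallest set of tokens whose cumulative
  // probability reaches topp, renormalize, and sample from that set.
  const sorted = probs.map((p, i) => [i, p]).sort((a, b) => b[1] - a[1]);
  let cum = 0;
  const nucleus = [];
  for (const [i, p] of sorted) {
    nucleus.push([i, p]);
    cum += p;
    if (cum >= topp) break;
  }
  return sampleFrom(nucleus.map(([i, p]) => [i, p / cum]));
}
```

With the new default, sampleToken(logits, { temperature: 1.0 }) samples from the full softmax distribution rather than a truncated nucleus, which is the upstream llama2.c behavior the commit message references.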
