2 parents fc22734 + 24ced1a — commit eb8f146
training/generate.py
@@ -70,7 +70,7 @@ def __init__(
 
         Args:
             do_sample (bool, optional): Whether or not to use sampling. Defaults to True.
-            max_new_tokens (int, optional): Max new tokens after the prompt to generate. Defaults to 128.
+            max_new_tokens (int, optional): Max new tokens after the prompt to generate. Defaults to 256.
            top_p (float, optional): If set to float < 1, only the smallest set of most probable tokens with
                probabilities that add up to top_p or higher are kept for generation. Defaults to 0.92.
            top_k (int, optional): The number of highest probability vocabulary tokens to keep for top-k-filtering.
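The `top_k` and `top_p` parameters documented above combine two standard sampling filters: top-k keeps only the k highest-probability tokens, and top-p (nucleus sampling) then keeps the smallest prefix of those whose probabilities sum to at least `top_p`. A minimal sketch of that filtering over a toy probability distribution, assuming the semantics described in the docstring (the helper name and example distribution here are illustrative, not part of `training/generate.py`):

```python
def top_k_top_p_filter(probs, top_k=50, top_p=0.92):
    """Apply top-k then top-p (nucleus) filtering to a token distribution.

    probs: list of token probabilities summing to 1.
    Returns a renormalized dict {token_index: probability} of kept tokens.
    """
    # Rank token indices by probability, highest first.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    # Top-k: keep at most top_k highest-probability tokens.
    ranked = ranked[:top_k]
    # Top-p: keep the smallest set whose cumulative probability >= top_p.
    kept, cumulative = [], 0.0
    for i in ranked:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    # Renormalize the surviving probabilities so they sum to 1.
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

# Example: with top_p=0.92, indices 0-2 cover 0.9 < 0.92,
# so index 3 is also kept (cumulative 0.95), and index 4 is dropped.
probs = [0.5, 0.3, 0.1, 0.05, 0.05]
filtered = top_k_top_p_filter(probs, top_k=50, top_p=0.92)
```

With the docstring's default of `top_p=0.92`, the filter keeps just enough of the head of the distribution to cover 92% of the probability mass before renormalizing.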