  systemMessage: 'The starting message that defines how the model should behave.',
  temperature: 'Controls the randomness of the generated text by affecting the probability distribution of the output tokens. Higher = more random, lower = more focused.',
- dynatemp_range: 'The added value to the range of dynamic temperature, which adjusts probabilities by entropy of tokens.',
- dynatemp_exponent: 'Smoothes out the probability redistribution based on the most probable token.',
+ dynatemp_range: 'Addon for the temperature sampler. The added value to the range of dynamic temperature, which adjusts probabilities by entropy of tokens.',
+ dynatemp_exponent: 'Addon for the temperature sampler. Smooths out the probability redistribution based on the most probable token.',
  top_k: 'Keeps only the k top tokens.',
  top_p: 'Limits tokens to those that together have a cumulative probability of at least p.',
  min_p: 'Limits tokens based on the minimum probability for a token to be considered, relative to the probability of the most likely token.',
- xtc_probability: 'The probability that the XTC sampler will cut token from the beginning.',
- xtc_threshold: 'If XTC is used, all top tokens with probabilities above this threshold will be cut.',
+ xtc_probability: 'XTC sampler cuts out top tokens; this parameter controls the chance of cutting tokens at all. 0 disables XTC.',
+ xtc_threshold: 'XTC sampler cuts out top tokens; this parameter controls the token probability that is required to cut that token.',
  typical_p: 'Sorts and limits tokens based on the difference between log-probability and entropy.',
  repeat_last_n: 'Last n tokens to consider for penalizing repetition.',
  repeat_penalty: 'Controls the repetition of token sequences in the generated text.',
  presence_penalty: 'Limits tokens based on whether they appear in the output or not.',
  frequency_penalty: 'Limits tokens based on how often they appear in the output.',
- dry_multiplier: 'Sets the DRY sampling multiplier.',
- dry_base: 'Sets the DRY sampling base value.',
- dry_allowed_length: 'Sets the allowed length for DRY sampling.',
- dry_penalty_last_n: 'Sets DRY penalty for the last n tokens.',
+ dry_multiplier: 'DRY sampling reduces repetition in generated text even across long contexts. This parameter sets the DRY sampling multiplier.',
+ dry_base: 'DRY sampling reduces repetition in generated text even across long contexts. This parameter sets the DRY sampling base value.',
+ dry_allowed_length: 'DRY sampling reduces repetition in generated text even across long contexts. This parameter sets the allowed length for DRY sampling.',
+ dry_penalty_last_n: 'DRY sampling reduces repetition in generated text even across long contexts. This parameter sets the DRY penalty for the last n tokens.',
  max_tokens: 'The maximum number of tokens per output.',
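To make the truncation samplers in these descriptions concrete, here is a minimal sketch of how top_k, top_p, and min_p each narrow a token distribution. It assumes an already-normalized probability table; the function and token names are illustrative, not llama.cpp's actual implementation.

```python
def filter_tokens(probs, top_k=0, top_p=1.0, min_p=0.0):
    """Return the set of tokens that survive the enabled filters.

    probs: dict mapping token -> probability (assumed to sum to 1).
    top_k=0, top_p=1.0, and min_p=0.0 leave each filter disabled.
    """
    # Rank tokens from most to least probable.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)

    # top_k: keep only the k most probable tokens.
    if top_k > 0:
        ranked = ranked[:top_k]

    # top_p: keep the smallest prefix whose cumulative probability
    # reaches at least p.
    if top_p < 1.0:
        kept, cumulative = [], 0.0
        for token, p in ranked:
            kept.append((token, p))
            cumulative += p
            if cumulative >= top_p:
                break
        ranked = kept

    # min_p: drop tokens whose probability falls below min_p times
    # the probability of the most likely token.
    if min_p > 0.0 and ranked:
        floor = min_p * ranked[0][1]
        ranked = [(token, p) for token, p in ranked if p >= floor]

    return {token for token, _ in ranked}
```

For example, with probabilities {a: 0.5, b: 0.3, c: 0.15, d: 0.05}, top_k=2 keeps {a, b}, top_p=0.8 keeps {a, b} (cumulative 0.8 reaches the target), and min_p=0.2 keeps {a, b, c} (the floor is 0.2 × 0.5 = 0.1, which only d falls below).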