app/src/main/res/values/strings.xml (+8 lines changed: 8 additions & 0 deletions)
@@ -32,11 +32,19 @@
     <string name="chat_settings_desc_temp">Temperature is a parameter that controls the randomness and creativity of LLM outputs, with lower temperatures producing more deterministic and focused responses, and higher temperatures leading to more diverse and creative outputs.</string>
     <string name="chat_settings_desc_ctx_length">The context length of a large language model (LLM) refers to the maximum number of tokens (words or subwords) it can process in a single input or output sequence. Larger context sizes need more memory.</string>
     <string name="chat_settings_desc_n_threads">The number of CPU threads to use for inference.</string>
+    <string name="chat_settings_desc_topP">Top-p sampling selects the smallest set of most probable tokens whose cumulative probability exceeds a threshold, allowing for a more dynamic and diverse output.</string>
+    <string name="chat_settings_desc_topK">Top-k sampling limits the model\'s choices to the k most likely next tokens, ensuring a more focused and predictable output.</string>
+    <string name="chat_settings_desc_xtcP">...remove all except the least probable one from sampling, with probability xtcP</string>
+    <string name="chat_settings_desc_xtcT">If there are multiple tokens with predicted probability at least the threshold xtcT...</string>