Commit a854bd7 (parent 1e19172)

docs: improve code documentation and formatting consistency

- Add a comment describing the `availableKeys` map
- Correct capitalization and formatting of configuration key descriptions

Signed-off-by: Bo-Yi Wu <[email protected]>

File tree: 1 file changed (+23, -22)


cmd/config_list.go (23 additions, 22 deletions)
@@ -14,29 +14,30 @@ func init() {
 }
 
 // availableKeys is a list of available config keys
+// availableKeys is a map of configuration keys and their descriptions
 var availableKeys = map[string]string{
-	"git.diff_unified": "generate diffs with <n> lines of context, default is 3",
-	"git.exclude_list": "exclude file from git diff command",
-	"git.template_file": "template file for commit message",
-	"git.template_string": "template string for commit message",
-	"openai.socks": "socks proxy",
-	"openai.api_key": "openai api key",
-	"openai.model": "openai model",
-	"openai.org_id": "openai requesting organization",
-	"openai.proxy": "http proxy",
-	"output.lang": "summarizing language uses English by default",
-	"openai.base_url": "what API base url to use.",
-	"openai.timeout": "request timeout",
-	"openai.max_tokens": "the maximum number of tokens to generate in the chat completion.",
-	"openai.temperature": "What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.",
-	"openai.provider": "service provider, only support 'openai' or 'azure'",
-	"openai.skip_verify": "skip verify TLS certificate",
-	"openai.headers": "custom headers for openai request",
-	"openai.api_version": "openai api version",
-	"openai.top_p": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
-	"openai.frequency_penalty": "Number between 0.0 and 1.0 that penalizes new tokens based on their existing frequency in the text so far. Decreases the model's likelihood to repeat the same line verbatim.",
-	"openai.presence_penalty": "Number between 0.0 and 1.0 that penalizes new tokens based on whether they appear in the text so far. Increases the model's likelihood to talk about new topics.",
-	"prompt.folder": "prompt template folder",
+	"git.diff_unified": "Generate diffs with <n> lines of context, default is 3",
+	"git.exclude_list": "Exclude file from git diff command",
+	"git.template_file": "Template file for commit message",
+	"git.template_string": "Template string for commit message",
+	"openai.socks": "SOCKS proxy",
+	"openai.api_key": "OpenAI API key",
+	"openai.model": "OpenAI model",
+	"openai.org_id": "OpenAI requesting organization",
+	"openai.proxy": "HTTP proxy",
+	"output.lang": "Summarizing language, defaults to English",
+	"openai.base_url": "API base URL to use",
+	"openai.timeout": "Request timeout",
+	"openai.max_tokens": "Maximum number of tokens to generate in the chat completion",
+	"openai.temperature": "Sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic",
+	"openai.provider": "Service provider, supports 'openai' or 'azure'",
+	"openai.skip_verify": "Skip verifying TLS certificate",
+	"openai.headers": "Custom headers for OpenAI request",
+	"openai.api_version": "OpenAI API version",
+	"openai.top_p": "Nucleus sampling probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered",
+	"openai.frequency_penalty": "Penalty for new tokens based on their existing frequency in the text so far. Decreases the model's likelihood to repeat the same line verbatim",
+	"openai.presence_penalty": "Penalty for new tokens based on whether they appear in the text so far. Increases the model's likelihood to talk about new topics",
+	"prompt.folder": "Prompt template folder",
 }
 
 // configListCmd represents the command to list the configuration values.
