
Commit e48fd4c

Refactor models.py (#209)
* add new params to readme and plan_editor
* del OpenaiHandler, OllamaHandler. local url in models -> config.toml
* remove linux-only comment for delete dir flag
* add params to readme
* rename local_url -> itmo_local_url
1 parent: 3707890 · commit: e48fd4c

File tree

10 files changed: +106 −249 lines


README.md

Lines changed: 4 additions & 1 deletion
@@ -187,9 +187,12 @@ documentation, see the [GitHub Action Workflow Generator README](./osa_tool/gith
  | `--base-url` | URL of the provider compatible with API OpenAI | `https://api.openai.com/v1` |
  | `--model` | Specific LLM model to use | `gpt-3.5-turbo` |
  | `-m`, `--mode` | Operation mode for repository processing: `basic`, `auto` (default), or `advanced`. | `auto` |
- | `--delete-dir` | Enable deleting the downloaded repository after processing (**Linux only**) | `disabled` |
+ | `--delete-dir` | Enable deleting the downloaded repository after processing | `disabled` |
  | `--no-fork` | Avoid create fork for target repository | `False` |
  | `--no-pull-request` | Avoid create pull request for target repository | `False` |
+ | `--top_p` | Nucleus sampling probability | `null` |
+ | `--temperature` | Sampling temperature to use for the LLM output (0 = deterministic, 1 = creative). | `null` |
+ | `--max_tokens` | Maximum number of tokens the model can generate in a single response | `null` |

  To learn how to work with the interactive CLI and view descriptions of all available keys, visit
  the [CLI usage guide](./osa_tool/scheduler/README.md).
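The three new flags default to `null`, which conventionally means "omit the parameter and let the provider use its own default". As a rough illustration (not OSA's actual code, and the function name is hypothetical), a request builder would typically drop any option left unset:

```python
def build_llm_params(temperature=None, top_p=None, max_tokens=None):
    """Collect sampling options for an OpenAI-compatible request body,
    omitting any option left at its null default."""
    params = {
        "temperature": temperature,
        "top_p": top_p,
        "max_tokens": max_tokens,
    }
    return {key: value for key, value in params.items() if value is not None}

# Only explicitly set options reach the request payload.
print(build_llm_params(temperature=0.3, max_tokens=1024))
```

With no arguments the result is an empty dict, so the provider's defaults apply.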

docs/index.md

Lines changed: 14 additions & 11 deletions
@@ -148,17 +148,20 @@ Docker, ensure that you upload the PDF file to the OSA folder before building th

  ### Configuration

- | Flag | Description | Default |
- |----------------------|-----------------------------------------------------------------------------|-----------------------------|
- | `-r`, `--repository` | URL of the GitHub repository (**Mandatory**) | |
- | `--api` | LLM API service provider | `llama` |
- | `--base-url` | URL of the provider compatible with API OpenAI | `https://api.openai.com/v1` |
- | `--model` | Specific LLM model to use | `gpt-3.5-turbo` |
- | `--article` | Link to the pdf file of the article | `None` |
- | `--translate-dirs` | Enable automatic translation of the directory name into English | `disabled` |
- | `--delete-dir` | Enable deleting the downloaded repository after processing (**Linux only**) | `disabled` |
- | `--no-fork` | Avoid create fork for target repository | `False` |
- | `--no-pull-request` | Avoid create pull request for target repository | `False` |
+ | Flag | Description | Default |
+ |----------------------|-----------------------------------------------------------------------------------|-----------------------------|
+ | `-r`, `--repository` | URL of the GitHub repository (**Mandatory**) | |
+ | `--api` | LLM API service provider | `llama` |
+ | `--base-url` | URL of the provider compatible with API OpenAI | `https://api.openai.com/v1` |
+ | `--model` | Specific LLM model to use | `gpt-3.5-turbo` |
+ | `--article` | Link to the pdf file of the article | `None` |
+ | `--translate-dirs` | Enable automatic translation of the directory name into English | `disabled` |
+ | `--delete-dir` | Enable deleting the downloaded repository after processing | `disabled` |
+ | `--no-fork` | Avoid create fork for target repository | `False` |
+ | `--no-pull-request` | Avoid create pull request for target repository | `False` |
+ | `--top_p` | Nucleus sampling probability | `null` |
+ | `--temperature` | Sampling temperature to use for the LLM output (0 = deterministic, 1 = creative). | `null` |
+ | `--max_tokens` | Maximum number of tokens the model can generate in a single response | `null` |

  ---
osa_tool/config/settings.py

Lines changed: 1 addition & 0 deletions
@@ -51,6 +51,7 @@ class ModelSettings(BaseModel):
      encoder: str
      host_name: AnyHttpUrl
      localhost: AnyHttpUrl
+     itmo_local_url: AnyHttpUrl
      model: str
      path: str
      temperature: NonNegativeFloat
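In the actual project this field relies on pydantic's `AnyHttpUrl`, which rejects non-HTTP values at construction time. A minimal self-contained sketch of the same validation idea, using a plain dataclass and a simplified stand-in check instead of pydantic:

```python
from dataclasses import dataclass
from urllib.parse import urlparse


def check_http_url(url: str) -> str:
    """Rough stand-in for pydantic's AnyHttpUrl: require an http(s) scheme and a host."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"not an HTTP URL: {url!r}")
    return url


@dataclass
class ModelSettings:
    host_name: str
    localhost: str
    itmo_local_url: str  # field added by this commit (was local_url)

    def __post_init__(self):
        # Validate every URL field on construction, like pydantic would.
        for name in ("host_name", "localhost", "itmo_local_url"):
            check_http_url(getattr(self, name))
```

A misconfigured value such as `itmo_local_url = "10.32.15.21:6672"` (missing the scheme) would then fail fast instead of surfacing later as a broken request.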

osa_tool/config/settings/arguments.yaml

Lines changed: 22 additions & 1 deletion
@@ -55,6 +55,27 @@ model:
      3. https://ollama.com/library
    default: "gpt-3.5-turbo"

+ temperature:
+   aliases: [ "--temperature" ]
+   type: str
+   description: "Sampling temperature to use for the LLM output (0 = deterministic, 1 = creative)."
+   default: null
+   example: 0.3, 0.9
+
+ tokens:
+   aliases: [ "--max-tokens" ]
+   type: str
+   description: "Maximum number of tokens the model can generate in a single response."
+   default: 4096
+   example: 256, 1024
+
+ top_p:
+   aliases: [ "--top-p" ]
+   type: str
+   description: "Nucleus sampling probability (1.0 = all tokens considered)."
+   default: null
+   example: 0.8, 0.95
+
  article:
    aliases: [ "--article" ]
    type: str
@@ -82,7 +103,7 @@ convert_notebooks:
  delete_dir:
    aliases: [ "--delete-dir" ]
    type: flag
-   description: "Enable deleting the downloaded repository after processing. (Linux only)"
+   description: "Enable deleting the downloaded repository after processing."
    default: false

  ensure_license:
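Entries like the ones above are presumably turned into CLI options by a generic loop over the YAML keys. A hedged sketch of that pattern, using a hand-written dict that mirrors the new entries (the parser-building code here is illustrative, not OSA's actual implementation):

```python
import argparse

# Dict mirroring the new arguments.yaml entries; values copied from the diff.
ARGUMENTS = {
    "temperature": {"aliases": ["--temperature"], "type": str, "default": None},
    "tokens":      {"aliases": ["--max-tokens"],  "type": str, "default": 4096},
    "top_p":       {"aliases": ["--top-p"],       "type": str, "default": None},
}


def build_parser() -> argparse.ArgumentParser:
    """Register one CLI option per declared argument."""
    parser = argparse.ArgumentParser()
    for name, spec in ARGUMENTS.items():
        parser.add_argument(*spec["aliases"], dest=name,
                            type=spec["type"], default=spec["default"])
    return parser


args = build_parser().parse_args(["--temperature", "0.3", "--max-tokens", "1024"])
print(args.temperature, args.tokens, args.top_p)  # "0.3" "1024" None
```

Since the declared `type` is `str`, any numeric conversion and range checking would have to happen downstream of the parser.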

osa_tool/config/settings/config.toml

Lines changed: 1 addition & 0 deletions
@@ -27,6 +27,7 @@ context_window = 4096
  encoder = "cl100k_base"
  host_name = "https://api.openai.com/v1"
  localhost = "http://localhost:11434/"
+ itmo_local_url = "http://10.32.15.21:6672"
  model = "gpt-3.5-turbo"
  path = "generate"
  temperature = 0.05
