@@ -148,17 +148,20 @@ Docker, ensure that you upload the PDF file to the OSA folder before building th
 
 ### Configuration
 
-| Flag | Description | Default |
-| ----------------------| -----------------------------------------------------------------------------| -----------------------------|
-| `-r`, `--repository` | URL of the GitHub repository (**Mandatory**) | |
-| `--api` | LLM API service provider | `llama` |
-| `--base-url` | URL of the provider compatible with API OpenAI | `https://api.openai.com/v1` |
-| `--model` | Specific LLM model to use | `gpt-3.5-turbo` |
-| `--article` | Link to the pdf file of the article | `None` |
-| `--translate-dirs` | Enable automatic translation of the directory name into English | `disabled` |
-| `--delete-dir` | Enable deleting the downloaded repository after processing (**Linux only**) | `disabled` |
-| `--no-fork` | Avoid create fork for target repository | `False` |
-| `--no-pull-request` | Avoid create pull request for target repository | `False` |
+| Flag | Description | Default |
+| ----------------------| -----------------------------------------------------------------------------------| -----------------------------|
+| `-r`, `--repository` | URL of the GitHub repository (**Mandatory**) | |
+| `--api` | LLM API service provider | `llama` |
+| `--base-url` | URL of an OpenAI-compatible API provider | `https://api.openai.com/v1` |
+| `--model` | Specific LLM model to use | `gpt-3.5-turbo` |
+| `--article` | Link to the PDF file of the article | `None` |
+| `--translate-dirs` | Enable automatic translation of directory names into English | `disabled` |
+| `--delete-dir` | Enable deletion of the downloaded repository after processing | `disabled` |
+| `--no-fork` | Do not create a fork of the target repository | `False` |
+| `--no-pull-request` | Do not create a pull request for the target repository | `False` |
+| `--top_p` | Nucleus sampling probability | `null` |
+| `--temperature` | Sampling temperature for the LLM output (0 = deterministic, 1 = creative) | `null` |
+| `--max_tokens` | Maximum number of tokens the model can generate in a single response | `null` |
 
 ---
 
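As a sketch of how the flags in the updated table combine, the invocation below passes a repository together with the new sampling options. The `osa-tool` entrypoint name and the repository URL are assumptions for illustration; only the flags themselves come from the table.

```shell
# Hypothetical invocation -- entrypoint name and repo URL are placeholders.
osa-tool -r https://github.com/example/project \
  --api openai \
  --base-url https://api.openai.com/v1 \
  --model gpt-3.5-turbo \
  --temperature 0 \
  --max_tokens 1024 \
  --no-fork --no-pull-request
```

With `--no-fork` and `--no-pull-request` set, the run stays local: the tool processes the repository without touching the GitHub account.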