**README.md** (20 additions, 0 deletions)
The model version can then be updated by changing `gpt_latest` in the `model_config` file and applied across all taskflows that use the config.
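As a sketch of what that looks like, assuming the `models` section of a `model_config` maps an alias to a concrete model identifier (the identifier below is only an example):

```yaml
models:
  gpt_latest: gpt-4o
```

Changing the identifier assigned to `gpt_latest` here would then take effect in every taskflow that references the alias.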
In addition, model-specific parameters can be provided via `model_config`. To do so, define a `model_settings` section in the `model_config` file. This section must be a dictionary keyed by model name:
```yaml
model_settings:
  gpt_latest:
    temperature: 1
    reasoning:
      effort: high
```
You do not need to set parameters for every model defined in the `models` section; when parameters are not set for a model, they fall back to their default values. However, every entry in this section must correspond to a model specified in the `models` section, otherwise an error is raised:
```yaml
model_settings:
  new_model:
    ...
```
The above results in an error because `new_model` is not defined in the `models` section. Model parameters can also be set per task, and any settings defined in a task override the settings in the config.
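For illustration, a task-level override might look like the following (a sketch only; it assumes tasks reference the config alias and use the per-task `model_settings` syntax documented in doc/GRAMMAR.md):

```yaml
model: gpt_latest
model_settings:
  temperature: 0.2
```

Here the task's `temperature` would take precedence over any value set for `gpt_latest` in the `model_config`.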
## Passing environment variables
Files of types `taskflow` and `toolbox` allow environment variables to be passed using the `env` field.
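The original example for `env` is not reproduced here, but as a hedged sketch, assuming `env` takes a mapping of variable names to values (both names below are purely illustrative):

```yaml
env:
  MY_API_HOST: https://example.com
  MY_TIMEOUT: "30"
```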
**doc/GRAMMAR.md** (12 additions, 0 deletions)

Tasks can optionally specify which Model to use on the configured inference endpoint.
Note that model identifiers may differ between OpenAI-compatible endpoint providers; make sure you change your model identifier accordingly when switching providers. If not specified, a default LLM model (`gpt-4o`) is used.
Parameters to the model can also be specified in the task using the `model_settings` section:
```yaml
model: gpt-5-mini
model_settings:
  temperature: 1
  reasoning:
    effort: high
```
If `model_settings` is absent, the model parameters fall back either to the defaults or to the values supplied in a `model_config`. However, any parameters supplied in the task override those set in the `model_config`.
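To sketch the precedence (assuming a `model_config` and a task defined as below; the alias and values are illustrative), the task's `temperature` of 0 wins, while `reasoning.effort` still comes from the config:

```yaml
# in the model_config file
model_settings:
  gpt_latest:
    temperature: 1
    reasoning:
      effort: high

# in the task
model: gpt_latest
model_settings:
  temperature: 0
```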
### Completion Requirement
Tasks can be marked as requiring completion; if a required task fails, the taskflow will abort. This defaults to false.
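As a sketch only (the key name `required` below is a guess and is not confirmed by this document; check the grammar for the actual field name):

```yaml
# hypothetical field name marking a task as requiring completion
required: true
```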
```python
        packagename (str): The name of the package. Used as a subdirectory under the data directory.
        mcpname (str): The name of the MCP server. Used as a subdirectory under the package directory.
        env_override (str | None): The name of an environment variable that, if set, overrides the default data directory location. If None, the default location is used.

    Returns:
        Path: The path to the created data directory for the MCP server.
    """
    if env_override:
        p = os.getenv(env_override)
        if p:
            return Path(p)
    # Use platformdirs (https://pypi.org/project/platformdirs/) to
```