adalflow/adalflow/core/generator.py (15 additions, 0 deletions)
@@ -100,6 +100,8 @@ def __init__(
         # args for the cache
         cache_path: Optional[str] = None,
         use_cache: bool = False,
+        # args for model type
+        model_type: ModelType = ModelType.LLM,
     ) -> None:
         r"""The default prompt is set to the DEFAULT_ADALFLOW_SYSTEM_PROMPT. It has the following variables:
         - task_desc_str
@@ -110,6 +112,17 @@ def __init__(
         - steps_str
         You can preset the prompt kwargs to fill in the variables in the prompt using prompt_kwargs.
         But you can replace the prompt and set any variables you want and use the prompt_kwargs to fill in the variables.
+
+        Args:
+            model_client (ModelClient): The model client to use for the generator.
+            model_kwargs (Dict[str, Any], optional): The model kwargs to pass to the model client. Defaults to {}. Please refer to :ref:`ModelClient<components-model_client>` for details on how to set the model_kwargs for your specific model if it is from our library.
+            template (Optional[str], optional): The template for the prompt. Defaults to :ref:`DEFAULT_ADALFLOW_SYSTEM_PROMPT<core-default_prompt_template>`.
+            prompt_kwargs (Optional[Dict], optional): The preset prompt kwargs to fill in the variables in the prompt. Defaults to None.
+            output_processors (Optional[Component], optional): The output processors to run after the model call. It can be a single component or components chained via ``Sequential``. Defaults to None.
+            name (Optional[str], optional): The name of the generator. Defaults to None.
+            cache_path (Optional[str], optional): The path to save the cache. Defaults to None.
+            use_cache (bool, optional): Whether to use the cache. Defaults to False.
+            model_type (ModelType, optional): The type of model (EMBEDDER, LLM, or IMAGE_GENERATION). Defaults to ModelType.LLM.
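The diff adds a `model_type` keyword defaulting to `ModelType.LLM`, so existing callers that never pass it are unaffected while new callers can opt into other model types. A minimal sketch of that calling pattern, using stand-in classes (`ModelType` and `GeneratorStub` below are simplified stubs for illustration, not AdalFlow's actual implementations):

```python
from enum import Enum
from typing import Any, Dict, Optional


class ModelType(Enum):
    # Stand-in for AdalFlow's ModelType enum, with the three
    # variants named in the docstring above.
    EMBEDDER = "embedder"
    LLM = "llm"
    IMAGE_GENERATION = "image_generation"


class GeneratorStub:
    """Hypothetical stand-in mirroring the documented keyword signature."""

    def __init__(
        self,
        model_client: Any = None,
        model_kwargs: Optional[Dict[str, Any]] = None,
        cache_path: Optional[str] = None,
        use_cache: bool = False,
        model_type: ModelType = ModelType.LLM,  # the argument added by this diff
    ) -> None:
        self.model_client = model_client
        self.model_kwargs = model_kwargs or {}
        self.cache_path = cache_path
        self.use_cache = use_cache
        self.model_type = model_type


# Omitting model_type keeps the pre-diff behavior (an LLM generator),
# while other model types can now be requested explicitly:
default_gen = GeneratorStub()
image_gen = GeneratorStub(model_type=ModelType.IMAGE_GENERATION)
```

Defaulting the new trailing keyword preserves backward compatibility: no existing call site needs to change.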