
Commit 4cff53b

Author camel-docs-bot committed: Auto-update documentation after merge [skip ci]

1 parent 9d846e2 · commit 4cff53b

File tree: 4 files changed, +77 -0 lines changed

docs/mintlify/docs.json

Lines changed: 2 additions & 0 deletions
@@ -218,6 +218,7 @@
     {
       "group": "Configs",
       "pages": [
+        "reference/camel.configs.aihubmix_config",
         "reference/camel.configs.aiml_config",
         "reference/camel.configs.amd_config",
         "reference/camel.configs.anthropic_config",
@@ -299,6 +300,7 @@
       "group": "Models",
       "pages": [
         "reference/camel.models._utils",
+        "reference/camel.models.aihubmix_model",
         "reference/camel.models.aiml_model",
         "reference/camel.models.amd_model",
         "reference/camel.models.anthropic_model",
docs/mintlify/reference/camel.configs.aihubmix_config.mdx

Lines changed: 26 additions & 0 deletions (new file)
<a id="camel.configs.aihubmix_config"></a>

<a id="camel.configs.aihubmix_config.AihubMixConfig"></a>

## AihubMixConfig

```python
class AihubMixConfig(BaseConfig):
```

Defines the parameters for generating chat completions using the AihubMix API.

**Parameters:**

- **temperature** (float, optional): Sampling temperature to use, between :obj:`0` and :obj:`2`. Higher values make the output more random, while lower values make it more focused and deterministic. (default: :obj:`0.8`)
- **max_tokens** (int, optional): The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. (default: :obj:`1024`)
- **top_p** (float, optional): An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So :obj:`0.1` means only the tokens comprising the top 10% probability mass are considered. (default: :obj:`1`)
- **frequency_penalty** (float, optional): Number between :obj:`-2.0` and :obj:`2.0`. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. (default: :obj:`0`)
- **presence_penalty** (float, optional): Number between :obj:`-2.0` and :obj:`2.0`. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. (default: :obj:`0`)
- **stream** (bool, optional): If :obj:`True`, partial message deltas will be sent as data-only server-sent events as they become available. (default: :obj:`False`)
- **web_search_options** (dict, optional): The search model's web search options, supported only by specific search models. (default: :obj:`None`)
- **tools** (list[FunctionTool], optional): A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported.
- **tool_choice** (Union[dict[str, str], str], optional): Controls which (if any) tool is called by the model. :obj:`"none"` means the model will not call any tool and instead generates a message. :obj:`"auto"` means the model can pick between generating a message or calling one or more tools. :obj:`"required"` means the model must call one or more tools. Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. :obj:`"none"` is the default when no tools are present; :obj:`"auto"` is the default if tools are present.
- **parallel_tool_calls** (bool, optional): Whether the model should call tools in parallel. (default: :obj:`None`)
- **extra_headers** (Optional[Dict[str, str]], optional): Extra headers to use for the model. (default: :obj:`None`)
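A minimal usage sketch follows. It assumes `AihubMixConfig` is importable from `camel.configs.aihubmix_config` (matching the anchor above) and that `BaseConfig` exposes an `as_dict()` helper as other CAMEL configs do; the chosen values are illustrative only.

```python
# Minimal sketch: build an AihubMixConfig and inspect the request parameters
# it would contribute. The import path follows the module anchor above;
# as_dict() is assumed to come from BaseConfig.
from camel.configs.aihubmix_config import AihubMixConfig

config = AihubMixConfig(
    temperature=0.3,  # lower temperature for more deterministic output
    max_tokens=512,   # cap on generated tokens per completion
    top_p=0.9,        # nucleus sampling threshold
    stream=False,     # single full response rather than SSE deltas
)

# Dump to a plain dict, the form typically passed to a model backend
# as model_config_dict.
print(config.as_dict())
```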
docs/mintlify/reference/camel.models.aihubmix_model.mdx

Lines changed: 39 additions & 0 deletions (new file)
<a id="camel.models.aihubmix_model"></a>

<a id="camel.models.aihubmix_model.AihubMixModel"></a>

## AihubMixModel

```python
class AihubMixModel(OpenAICompatibleModel):
```

Exposes the AihubMix API through the unified OpenAICompatibleModel interface.

**Parameters:**

- **model_type** (Union[ModelType, str]): Model for which a backend is created.
- **model_config_dict** (Optional[Dict[str, Any]], optional): A dictionary that will be fed into the OpenAI client. If :obj:`None`, :obj:`{}` will be used. (default: :obj:`None`)
- **api_key** (Optional[str], optional): The API key for authenticating with the AihubMix service. (default: :obj:`None`)
- **url** (Optional[str], optional): The URL to the AihubMix service. If not provided, :obj:`https://aihubmix.com/v1` will be used. (default: :obj:`None`)
- **token_counter** (Optional[BaseTokenCounter], optional): Token counter to use for the model. If not provided, :obj:`OpenAITokenCounter(ModelType.GPT_4O_MINI)` will be used. (default: :obj:`None`)
- **timeout** (Optional[float], optional): The timeout value in seconds for API calls. If not provided, falls back to the MODEL_TIMEOUT environment variable or defaults to 180 seconds. (default: :obj:`None`)
- **max_retries** (int, optional): Maximum number of retries for API calls. (default: :obj:`3`)
- **kwargs** (Any): Additional arguments to pass to the client initialization.

<a id="camel.models.aihubmix_model.AihubMixModel.__init__"></a>

### __init__

```python
def __init__(
    self,
    model_type: Union[ModelType, str],
    model_config_dict: Optional[Dict[str, Any]] = None,
    api_key: Optional[str] = None,
    url: Optional[str] = None,
    token_counter: Optional[BaseTokenCounter] = None,
    timeout: Optional[float] = None,
    max_retries: int = 3,
    **kwargs: Any
):
```
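Below is a minimal instantiation sketch. The model name, the `AIHUBMIX_API_KEY` environment variable, and the timeout value are illustrative assumptions; only the constructor signature comes from the documentation above.

```python
# Minimal sketch: create an AihubMixModel backend. The model name and the way
# the API key is supplied are assumptions, not values confirmed by these docs.
import os

from camel.models.aihubmix_model import AihubMixModel

model = AihubMixModel(
    model_type="gpt-4o-mini",  # hypothetical model name served by AihubMix
    model_config_dict={"temperature": 0.3, "max_tokens": 512},
    api_key=os.environ.get("AIHUBMIX_API_KEY"),  # assumed env var name
    timeout=60.0,      # override the 180-second default
    max_retries=3,
)
```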

docs/mintlify/reference/camel.types.enums.mdx

Lines changed: 10 additions & 0 deletions
@@ -841,6 +841,16 @@ def is_crynux(self):
Returns whether this platform is Crynux.

<a id="camel.types.enums.ModelPlatformType.is_aihubmix"></a>

### is_aihubmix

```python
def is_aihubmix(self):
```

Returns whether this platform is AihubMix.
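A short sketch of the new helper in use, assuming `ModelPlatformType` also gains an `AIHUBMIX` member (the member itself is not shown in this diff):

```python
# Sketch only: assumes ModelPlatformType defines an AIHUBMIX member to pair
# with the is_aihubmix() helper added here.
from camel.types.enums import ModelPlatformType

platform = ModelPlatformType.AIHUBMIX
print(platform.is_aihubmix())  # expected: True for the AihubMix platform
```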
<a id="camel.types.enums.AudioModelType"></a>

## AudioModelType
