@@ -0,0 +1,75 @@
---
title: ZenMux Chat Model node documentation
description: Learn how to use the ZenMux Chat Model node in n8n. Follow technical documentation to integrate ZenMux Chat Model node into your workflows.
contentType: [integration, reference]
priority: high
---

# ZenMux Chat Model node

Use the ZenMux Chat Model node to use ZenMux's chat models with conversational agents.

On this page, you'll find the node parameters for the ZenMux Chat Model node and links to more resources.

/// note | Credentials
You can find authentication information for this node [here](/integrations/builtin/credentials/zenmux.md).
///

--8<-- "_snippets/integrations/builtin/cluster-nodes/sub-node-expression-resolution.md"

## Node parameters

### Model

Select the model to use to generate the completion.

n8n dynamically loads models from ZenMux and you'll only see the models available to your account.

## Node options

Use these options to further refine the node's behavior.

### Frequency Penalty

Use this option to control the chances of the model repeating itself. Higher values reduce the chance of the model repeating itself.

### Maximum Number of Tokens

Enter the maximum number of tokens used, which sets the completion length.

### Response Format

Choose **Text** or **JSON**. **JSON** ensures the model returns valid JSON.

### Presence Penalty

Use this option to control the chances of the model talking about new topics. Higher values increase the chance of the model talking about new topics.

### Sampling Temperature

Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

### Timeout

Enter the maximum request time in milliseconds.

### Max Retries

Enter the maximum number of times to retry a request.

### Top P

Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.
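Because ZenMux exposes an OpenAI-compatible chat completion API, these node options map onto the familiar OpenAI-style request parameters. The sketch below is illustrative only: the model ID is a placeholder, and the values are arbitrary examples, not recommended defaults.

```python
import json

# Illustrative request body showing how the node options above map onto
# OpenAI-style parameters. "some-provider/some-model" is a placeholder,
# not a real ZenMux model ID.
body = {
    "model": "some-provider/some-model",
    "messages": [{"role": "user", "content": "Summarize this workflow run."}],
    "frequency_penalty": 0.5,   # Frequency Penalty: discourage repetition
    "max_tokens": 256,          # Maximum Number of Tokens: cap completion length
    "presence_penalty": 0.3,    # Presence Penalty: encourage new topics
    "temperature": 0.7,         # Sampling Temperature: randomness of sampling
    "top_p": 0.9,               # Top P: nucleus sampling cutoff
    "response_format": {"type": "json_object"},  # Response Format: JSON
}

payload = json.dumps(body)
```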

## Templates and examples

<!-- see https://www.notion.so/n8n/Pull-in-templates-for-the-integrations-pages-37c716837b804d30a33b47475f6e3780 -->
[[ templatesWidget(page.title, 'zenmux-chat-model') ]]

## Related resources

As ZenMux is API-compatible with OpenAI, you can refer to [LangChain's OpenAI documentation](https://js.langchain.com/docs/integrations/chat/openai/) for more information about the service.
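As a rough illustration of that compatibility, the sketch below builds (but does not send) a chat completion request with Python's standard library. The endpoint URL and model ID are assumptions based on the OpenAI-compatible convention, not confirmed values; check ZenMux's API documentation for the real endpoint. The retry helper mirrors the node's Timeout and Max Retries options.

```python
import json
import urllib.request

# Assumed OpenAI-compatible endpoint; verify against ZenMux's API docs.
ZENMUX_URL = "https://zenmux.ai/api/v1/chat/completions"
API_KEY = "YOUR_API_KEY"  # placeholder; use the key from your ZenMux settings

body = json.dumps({
    "model": "some-provider/some-model",  # hypothetical model ID
    "messages": [{"role": "user", "content": "Hello!"}],
}).encode("utf-8")

req = urllib.request.Request(
    ZENMUX_URL,
    data=body,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

def send_with_retries(request, max_retries=2, timeout_ms=60000):
    """Minimal retry loop mirroring the node's Timeout / Max Retries options."""
    last_error = None
    for _attempt in range(max_retries + 1):
        try:
            with urllib.request.urlopen(request, timeout=timeout_ms / 1000) as resp:
                return json.loads(resp.read())
        except OSError as exc:  # covers network errors and timeouts
            last_error = exc
    raise last_error

# send_with_retries(req)  # not executed here: requires a valid API key
```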

--8<-- "_snippets/integrations/builtin/cluster-nodes/langchain-overview-link.md"


39 changes: 39 additions & 0 deletions docs/integrations/builtin/credentials/zenmux.md
@@ -0,0 +1,39 @@
---
title: ZenMux credentials
description: Documentation for ZenMux credentials. Use these credentials to authenticate ZenMux in n8n, a workflow automation platform.
contentType: [integration, reference]
priority: critical
---

# ZenMux credentials

You can use these credentials to authenticate the following nodes:

- [ZenMux Chat Model](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatzenmux.md)

## Prerequisites

Create a [ZenMux](https://zenmux.ai/) account.

## Supported authentication methods

- API key

## Related resources

Refer to [ZenMux's API documentation](https://docs.zenmux.ai/zh/api/openai/create-chat-completion.html) for more information about the service.

## Using API key

To configure this credential, you'll need:

- An **API Key**

To generate your API Key:

1. Log in to your ZenMux account or [create](https://zenmux.ai/) an account.
2. Open your [API keys](https://zenmux.ai/settings/keys) page.
3. Select **Create new secret key** to create an API key, optionally naming the key.
4. Copy your key and add it as the **API Key** in n8n.

Refer to the [ZenMux Quick Start](https://docs.zenmux.ai/guide/quickstart.html) page for more information.
2 changes: 2 additions & 0 deletions nav.yml
@@ -765,6 +765,7 @@ nav:
- OpenRouter Chat Model: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenrouter.md
- Vercel AI Gateway Chat Model: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatvercel.md
- xAI Grok Chat Model: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatxaigrok.md
- ZenMux Chat Model: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatzenmux.md
- Cohere Model: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmcohere.md
- Ollama Model:
- Ollama Model: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmollama/index.md
@@ -999,6 +1000,7 @@ nav:
- integrations/builtin/credentials/openai.md
- integrations/builtin/credentials/opencti.md
- integrations/builtin/credentials/openrouter.md
- integrations/builtin/credentials/zenmux.md
- integrations/builtin/credentials/openweathermap.md
- integrations/builtin/credentials/oracledb.md
- integrations/builtin/credentials/oura.md