Custom generation methods enable specialized behavior such as:
- have the model continue thinking if it is uncertain;
- roll back generation if the model gets stuck;
- handle special tokens with custom logic;
- use specialized KV caches;
We enable custom generation methods through model repositories, assuming a specific model tag and file structure (see subsection below). This feature is an extension of [custom modeling code](./models.md#custom-models) and, as such, requires setting `trust_remote_code=True`.
If a model repository holds a custom generation method, the easiest way to try it out is to load the model and generate with it:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
# `transformers-community/custom_generate_example` holds a copy of `Qwen/Qwen2.5-0.5B-Instruct`, but
# with custom generation code -> calling `generate` uses the custom generation method!
# (the lines below reconstruct the truncated example; the prompt and output are illustrative)
tokenizer = AutoTokenizer.from_pretrained("transformers-community/custom_generate_example")
model = AutoModelForCausalLM.from_pretrained(
    "transformers-community/custom_generate_example", device_map="auto", trust_remote_code=True
)
inputs = tokenizer(["The quick brown"], return_tensors="pt").to(model.device)
gen_out = model.generate(**inputs)
print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0])
'The quick brown fox jumps over a lazy dog, and the dog is a type of animal. Is'
```
Model repositories with custom generation methods have a special property: their generation method can be loaded from **any** model through [`~GenerationMixin.generate`]'s `custom_generate` argument. This means anyone can create and share their custom generation method to potentially work with any Transformers model, without requiring users to install additional Python packages.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch reconstructed from the surrounding text: load a vanilla model and point `custom_generate`
# at the repository holding the generation code (prompt and argument values are illustrative)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", device_map="auto")

inputs = tokenizer(["The quick brown"], return_tensors="pt").to(model.device)
gen_out = model.generate(
    **inputs, custom_generate="transformers-community/custom_generate_example", trust_remote_code=True
)
print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0])
```
You should read the `README.md` file of the repository containing the custom generation strategy to see what the new arguments and output type differences are, if they exist. Otherwise, you can assume it works like the base [`~GenerationMixin.generate`] method.
> [!TIP]
> You can find all custom generation methods by [searching for their custom tag](https://huggingface.co/models?other=custom_generate), `custom_generate`.
Consider the Hub repository [transformers-community/custom_generate_example](https://huggingface.co/transformers-community/custom_generate_example) as an example. The `README.md` states that it has an additional input argument, `left_padding`, which adds a number of padding tokens before the prompt.
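As an illustration, a call reusing the model, tokenizer, and inputs from the snippets above might pass the extra argument like this (a sketch; the value `5` is arbitrary):

```py
# Sketch: forward the method's extra `left_padding` argument through `generate`
gen_out = model.generate(
    **inputs,
    custom_generate="transformers-community/custom_generate_example",
    trust_remote_code=True,
    left_padding=5,  # number of pad tokens to add before the prompt
)
print(tokenizer.batch_decode(gen_out)[0])
```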
If the custom method pins Python requirements that your environment doesn't meet, you'll get an exception listing the missing packages. Updating your Python requirements accordingly will remove this error message.
### Creating a custom generation method
To create a new generation method, you need to create a new [**Model**](https://huggingface.co/new) repository and push a few files into it (see the layout sketch after the list).
1. The model you've designed your generation method with.
2. `custom_generate/generate.py`, which contains all the logic for your custom generation method.
3. `custom_generate/requirements.txt`, used to optionally add new Python requirements and/or lock specific versions to correctly use your method.
4. `README.md`, where you should add the `custom_generate` tag and document any new arguments or output type differences of your custom method.
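Putting these together, the repository layout looks roughly like the sketch below (files outside `custom_generate/` are the usual model and tokenizer files):

```
your_repo/
├── README.md               # includes the `custom_generate` tag
├── config.json
├── ...                     # other model/tokenizer files
└── custom_generate/
    ├── generate.py
    └── requirements.txt    # optional
```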
#### Adding the base model
The starting point for your custom generation method is a model repository just like any other. The model to add to this repository should be the model you've designed your method with, and it is meant to be part of a working self-contained model-generate pair. When the model in this repository is loaded, your custom generation method will override `generate`. Don't worry -- your generation method can still be loaded with any other Transformers model, as explained in the section above.
If you simply want to copy an existing model, you can do something like the sketch below (repository names are placeholders):
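```py
# Minimal sketch with placeholder repository names: load the source model and push it to your new repository
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("source/model_repo")
model = AutoModelForCausalLM.from_pretrained("source/model_repo")

tokenizer.push_to_hub("your/new_repo", private=True)
model.push_to_hub("your/new_repo", private=True)
```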
#### generate.py

This is the core of your generation method. It *must* contain a method named `generate`, and this method *must* take a `model` argument as its first argument. `model` is the model instance, which means you have access to all attributes and methods in the model, including the ones defined in [`GenerationMixin`] (like the base `generate` method).
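For illustration, a bare-bones `generate.py` implementing a plain greedy decoding loop might look like the sketch below. It is a minimal example rather than the method from the example repository above, and a real implementation would typically reuse more of the base `generate` logic:

```py
import torch


def generate(model, input_ids, generation_config=None, **kwargs):
    # `model` is the loaded model instance; fall back to its generation config if none is passed
    generation_config = generation_config or model.generation_config
    max_new_tokens = generation_config.max_new_tokens or 32

    # Plain greedy decoding: repeatedly append the most likely next token
    for _ in range(max_new_tokens):
        logits = model(input_ids).logits
        next_tokens = torch.argmax(logits[:, -1, :], dim=-1)
        input_ids = torch.cat((input_ids, next_tokens[:, None]), dim=-1)

    return input_ids
```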
> [!WARNING]
> `generate.py` must be placed in a folder named `custom_generate`, and not at the root level of the repository. The file paths for this feature are hardcoded.
Follow the recommended practices below to ensure your custom generation method works as expected.
- Feel free to reuse the logic for validation and input preparation in the original [`~GenerationMixin.generate`].
- Pin the `transformers` version in the requirements if you use any private method/attribute in `model`.
- Consider adding model validation, input validation, or even a separate test file to help users sanity-check your code in their environment.
Your custom `generate` method can import code from the `custom_generate` folder using relative imports. For example:

```py
from .utils import some_function
```
Only relative imports from the same-level `custom_generate` folder are supported. Parent/sibling folder imports are not valid. The `custom_generate` argument also works locally with any directory that contains a `custom_generate` structure. This is the recommended workflow for developing your custom generation method.
#### requirements.txt
You can optionally specify additional Python requirements in a `requirements.txt` file inside the `custom_generate` folder. These are checked at runtime, and missing requirements trigger an exception listing them, nudging users to update their environment.
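For example, a hypothetical `custom_generate/requirements.txt` pinning versions could contain (package names and versions here are purely illustrative):

```
transformers>=4.53
torch>=2.2
```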
#### README.md
The root-level `README.md` in the model repository usually describes the model therein. However, since the focus of the repository is the custom generation method, we highly recommend shifting its focus towards describing the custom generation method. In addition to a description of the method, we recommend documenting any input and/or output differences from the original [`~GenerationMixin.generate`]. This way, users can focus on what's new and rely on the Transformers docs for generic implementation details.
For discoverability, we highly recommend adding the `custom_generate` tag to your repository. To do so, the top of your `README.md` file should look like the example below. After you push the file, you should see the tag in your repository!
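A minimal version of that metadata block is sketched here; only the `custom_generate` entry under `tags` is essential, and `library_name` is a common companion field rather than a requirement stated above:

```yaml
---
library_name: transformers
tags:
  - custom_generate
---
```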
Recommended practices:
- Add self-contained examples to enable quick experimentation.
- Describe soft requirements, such as whether the method only works well with a certain family of models.
### Finding custom generation methods
You can find all custom generation methods by [searching for their custom tag](https://huggingface.co/models?other=custom_generate), `custom_generate`. In addition to the tag, we curate two collections of `custom_generate` methods:
- [Custom generation methods - Community](https://huggingface.co/collections/transformers-community/custom-generation-methods-community-6888fb1da0efbc592d3a8ab6) -- a collection of powerful methods contributed by the community;
- [Custom generation methods - Tutorials](https://huggingface.co/collections/transformers-community/custom-generation-methods-tutorials-6823589657a94940ea02cfec) -- a collection of reference implementations for methods that were previously part of `transformers`, as well as tutorials for `custom_generate`.
`docs/source/en/model_doc/glm4_moe.md`
## Overview
The [**GLM-4.5**](https://arxiv.org/abs/2508.06471) series models are foundation models designed for intelligent agents; the MoE variants are documented here as Glm4Moe.
GLM-4.5 has **355** billion total parameters with **32** billion active parameters, while GLM-4.5-Air adopts a more compact design with **106** billion total parameters and **12** billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications.
Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that provide two modes: thinking mode for complex reasoning and tool usage, and non-thinking mode for immediate responses.
We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license and can be used commercially and for secondary development.
As demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves exceptional performance with a score of **63.2**, ranking **3rd** among all proprietary and open-source models. Notably, GLM-4.5-Air delivers competitive results at **59.8** while maintaining superior efficiency.
For more evaluation results, showcases, and technical details, please visit our [technical report](https://arxiv.org/abs/2508.06471) or [technical blog](https://z.ai/blog/glm-4.5).
The model code, tool parser, and reasoning parser can be found in the implementations of [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe), [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py), and [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py).
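A minimal text-generation sketch using the standard auto classes is shown below. The checkpoint id is illustrative -- check the Hub for the exact repository name and for hardware requirements, since these are very large MoE models:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/GLM-4.5-Air"  # illustrative checkpoint id; verify the exact name on the Hugging Face Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Briefly explain mixture-of-experts models."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```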
`docs/source/en/model_doc/glm4v_moe.md`
## Overview
Vision-language models (VLMs) have become a key cornerstone of intelligent systems. As real-world AI tasks grow increasingly complex, VLMs urgently need to enhance reasoning capabilities beyond basic multimodal perception — improving accuracy, comprehensiveness, and intelligence — to enable complex problem solving, long-context understanding, and multimodal agents.
Through our open-source work, we aim to explore the technological frontier together with the community while empowering more developers to create exciting and innovative applications.
[GLM-4.5V](https://github.com/zai-org/GLM-V) is based on ZhipuAI’s next-generation flagship text foundation model GLM-4.5-Air (106B parameters, 12B active). It continues the technical approach of [GLM-4.1V-Thinking](https://arxiv.org/abs/2507.01006), achieving SOTA performance among models of the same scale on 42 public vision-language benchmarks. It covers common tasks such as image, video, and document understanding, as well as GUI agent operations.
Beyond benchmark performance, GLM-4.5V focuses on real-world usability. Through efficient hybrid training, it can handle diverse types of visual content, enabling full-spectrum vision reasoning, including:
- **Complex chart & long document parsing** (research report analysis, information extraction)
- **Grounding** (precise visual element localization)
The model also introduces a **Thinking Mode** switch, allowing users to balance between quick responses and deep reasoning. This switch works the same as in the `GLM-4.5` language model.
```py
['user\n\nWhat is shown in this image?\nassistant\nThere is a red stop sign in the image.\nuser\n\nWhat about this image? How many cats do you see?\nassistant\ntwo', 'user\n\nWhat is shown in this image?\nassistant\nThe image shows a whimsical scene of a snowman sitting by a campfire. The snowman is anthropomorphized, wearing a hat and']
```
### Video inference