
Commit 87bfbc3

[docs] UViT2D (#6643)
* uvit2d
* fix
* fix?
* add correct paper
* fix paths
* update abstract
Parent: a517f66

2 files changed: 41 additions, 0 deletions

docs/source/en/_toctree.yml

Lines changed: 2 additions & 0 deletions
```diff
@@ -228,6 +228,8 @@
       title: UNet3DConditionModel
     - local: api/models/unet-motion
       title: UNetMotionModel
+    - local: api/models/uvit2d
+      title: UViT2DModel
     - local: api/models/vq
       title: VQModel
     - local: api/models/autoencoderkl
```
docs/source/en/api/models/uvit2d.md (new file)

Lines changed: 39 additions & 0 deletions

The full contents of the new file:

<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# UVit2DModel

The [U-ViT](https://hf.co/papers/2301.11093) model is a vision transformer (ViT)-based UNet. It combines elements of a ViT (all inputs, such as time, conditions, and noisy image patches, are treated as tokens) and a UNet (long skip connections between the shallow and deep layers). The skip connections are important for predicting pixel-level features. An additional 3x3 convolutional block is applied prior to the final output to improve image quality.
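
Below is a minimal, self-contained PyTorch sketch of this pattern. It is a toy illustration of the ideas above (tokenized inputs, long skip connections, a final 3x3 convolution), not the diffusers implementation; all names and sizes are invented for the example.

```python
import torch
import torch.nn as nn


class ToyUViT(nn.Module):
    """Toy U-ViT: all inputs are tokens, long skips link shallow and deep layers."""

    def __init__(self, patch_size=4, channels=3, dim=256, depth=6, heads=4):
        super().__init__()
        self.p = patch_size
        patch_dim = channels * patch_size**2
        self.patch_embed = nn.Linear(patch_dim, dim)  # noisy image patches -> tokens
        self.time_embed = nn.Linear(1, dim)           # timestep -> one extra token
        make = lambda: nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.down = nn.ModuleList([make() for _ in range(depth // 2)])  # shallow layers
        self.mid = make()
        self.up = nn.ModuleList([make() for _ in range(depth // 2)])    # deep layers
        # long skips: concatenate shallow and deep tokens, project back to dim
        self.skip = nn.ModuleList([nn.Linear(2 * dim, dim) for _ in range(depth // 2)])
        self.unpatch = nn.Linear(dim, patch_dim)
        self.final_conv = nn.Conv2d(channels, channels, 3, padding=1)  # 3x3 block

    def forward(self, x, t):
        b, c, h, w = x.shape
        p = self.p
        # patchify: (b, c, h, w) -> (b, num_patches, c*p*p)
        tokens = (
            x.unfold(2, p, p).unfold(3, p, p)
            .permute(0, 2, 3, 1, 4, 5)
            .reshape(b, (h // p) * (w // p), c * p * p)
        )
        tokens = self.patch_embed(tokens)
        # the timestep is just another token, concatenated in front of the patches
        tokens = torch.cat([self.time_embed(t[:, None, None]), tokens], dim=1)

        skips = []
        for layer in self.down:
            tokens = layer(tokens)
            skips.append(tokens)  # stash shallow activations
        tokens = self.mid(tokens)
        for layer, proj in zip(self.up, self.skip):
            tokens = proj(torch.cat([tokens, skips.pop()], dim=-1))  # long skip
            tokens = layer(tokens)

        pixels = self.unpatch(tokens[:, 1:])  # drop the time token
        # unpatchify: (b, num_patches, c*p*p) -> (b, c, h, w)
        pixels = (
            pixels.reshape(b, h // p, w // p, c, p, p)
            .permute(0, 3, 1, 4, 2, 5)
            .reshape(b, c, h, w)
        )
        return self.final_conv(pixels)


model = ToyUViT()
out = model(torch.randn(2, 3, 32, 32), torch.rand(2))
print(out.shape)  # torch.Size([2, 3, 32, 32])
```
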
The abstract from the paper is:

*Currently, applying diffusion models in pixel space of high resolution images is difficult. Instead, existing approaches focus on diffusion in lower dimensional spaces (latent diffusion), or have multiple super-resolution levels of generation referred to as cascades. The downside is that these approaches add additional complexity to the diffusion framework. This paper aims to improve denoising diffusion for high resolution images while keeping the model as simple as possible. The paper is centered around the research question: How can one train a standard denoising diffusion model on high resolution images, and still obtain performance comparable to these alternate approaches? The four main findings are: 1) the noise schedule should be adjusted for high resolution images, 2) it is sufficient to scale only a particular part of the architecture, 3) dropout should be added at specific locations in the architecture, and 4) downsampling is an effective strategy to avoid high resolution feature maps. Combining these simple yet effective techniques, we achieve state-of-the-art on image generation among diffusion models without sampling modifiers on ImageNet.*

## UVit2DModel

[[autodoc]] UVit2DModel
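
As a quick sanity check, the model can be loaded from a pretrained checkpoint. A minimal sketch, assuming the aMUSEd weights at `amused/amused-256` keep the UVit2DModel under the `transformer` subfolder:

```python
from diffusers import UVit2DModel

# Assumption: the amused/amused-256 checkpoint stores its UVit2DModel
# weights in the "transformer" subfolder (the aMUSEd pipeline layout).
model = UVit2DModel.from_pretrained("amused/amused-256", subfolder="transformer")
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```
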
## UVit2DConvEmbed

[[autodoc]] models.unets.uvit_2d.UVit2DConvEmbed

## UVitBlock

[[autodoc]] models.unets.uvit_2d.UVitBlock

## ConvNextBlock

[[autodoc]] models.unets.uvit_2d.ConvNextBlock

## ConvMlmLayer

[[autodoc]] models.unets.uvit_2d.ConvMlmLayer
