
Commit 0bb4be3

zh translation
1 parent a631e73 commit 0bb4be3

File tree

2 files changed, +70 -70 lines changed

docs/zh/index.html

Lines changed: 69 additions & 69 deletions
@@ -7,20 +7,20 @@
 
 <meta name="twitter:card" content="summary"/>
 <meta name="twitter:image:src" content="https://avatars1.githubusercontent.com/u/64068543?s=400&amp;v=4"/>
-<meta name="twitter:title" content="Annotated Research Paper Implementations: Transformers, StyleGAN, Stable Diffusion, DDPM/DDIM, LayerNorm, Nucleus Sampling and more"/>
+<meta name="twitter:title" content="labml.ai 带注释的 PyTorch 版论文实现"/>
 <meta name="twitter:description" content=""/>
 <meta name="twitter:site" content="@labmlai"/>
 <meta name="twitter:creator" content="@labmlai"/>
 
 <meta property="og:url" content="https://nn.labml.ai/index.html"/>
-<meta property="og:title" content="Annotated Research Paper Implementations: Transformers, StyleGAN, Stable Diffusion, DDPM/DDIM, LayerNorm, Nucleus Sampling and more"/>
+<meta property="og:title" content="labml.ai 带注释的 PyTorch 版论文实现"/>
 <meta property="og:image" content="https://avatars1.githubusercontent.com/u/64068543?s=400&amp;v=4"/>
-<meta property="og:site_name" content="Annotated Research Paper Implementations: Transformers, StyleGAN, Stable Diffusion, DDPM/DDIM, LayerNorm, Nucleus Sampling and more"/>
+<meta property="og:site_name" content="labml.ai 带注释的 PyTorch 版论文实现"/>
 <meta property="og:type" content="object"/>
-<meta property="og:title" content="Annotated Research Paper Implementations: Transformers, StyleGAN, Stable Diffusion, DDPM/DDIM, LayerNorm, Nucleus Sampling and more"/>
+<meta property="og:title" content="labml.ai 带注释的 PyTorch 版论文实现"/>
 <meta property="og:description" content=""/>
 
-<title>Annotated Research Paper Implementations: Transformers, StyleGAN, Stable Diffusion, DDPM/DDIM, LayerNorm, Nucleus Sampling and more</title>
+<title>labml.ai 带注释的 PyTorch 版论文实现</title>
 <link rel="shortcut icon" href="/icon.png"/>
 <link rel="stylesheet" href="./pylit.css?v=1">
 <link rel="canonical" href="https://nn.labml.ai/index.html"/>
@@ -69,101 +69,101 @@
 <div class='section-link'>
 <a href='#section-0'>#</a>
 </div>
-<h1><a href="index.html">Annotated Research Paper Implementations: Transformers, StyleGAN, Stable Diffusion, DDPM/DDIM, LayerNorm, Nucleus Sampling and more</a></h1>
-<p>这是神经网络和相关算法的简单 PyTorch 实现的集合。<a href="https://github.com/labmlai/annotated_deep_learning_paper_implementations">这些实现</a>与解释一起记录,<a href="index.html">网站将这些内容</a>呈现为并排格式的注释。我们相信这些将帮助您更好地理解这些算法</p>
+<h1><a href="index.html">labml.ai 带注释的 PyTorch 版论文实现</a></h1>
+<p>这是一个用 PyTorch 实现各种神经网络和相关算法的集合。每个算法的<a href="https://github.com/labmlai/annotated_deep_learning_paper_implementations">代码实现</a>都有详细的解释说明,且在<a href="index.html">网站</a>上与代码逐行对应。我们相信,这些内容将帮助您更好地理解这些算法</p>
 <p><img alt="Screenshot" src="dqn-light.png"></p>
-<p>我们正在积极维护这个仓库并添加新的实现。<a href="https://twitter.com/labmlai"><img alt="Twitter" src="https://img.shields.io/twitter/follow/labmlai?style=social"></a>以获取更新。</p>
+<p>我们正在积极维护这个仓库并添加新的代码实现<a href="https://twitter.com/labmlai"><img alt="Twitter" src="https://img.shields.io/twitter/follow/labmlai?style=social"></a>以获取更新。</p>
 <h2>翻译</h2>
 <h3><strong><a href="https://nn.labml.ai">英语(原版)</a></strong></h3>
 </a><h3><strong><a href="https://nn.labml.ai/zh/">中文(翻译)</strong></h3>
-</a><h3><strong><a href="https://nn.labml.ai/ja/">日语(已翻译</strong></h3>
-<h2>纸质实现</h2>
-<h4><a href="transformers/index.html">变形金刚</a></h4>
-<ul><li><a href="transformers/mha.html">多头关注</a></li>
-<li><a href="transformers/models.html">变压器积木</a></li>
-<li><a href="transformers/xl/index.html">变压器 XL</a></li>
-<li><a href="transformers/xl/relative_mha.html">相对多头的注意力</a></li>
-<li><a href="transformers/rope/index.html">旋转位置嵌入 (ROPE)</a></li>
-<li><a href="transformers/alibi/index.html">注意线性偏差 (AliBI)</a></li>
-<li><a href="transformers/retro/index.html">复古</a></li>
-<li><a href="transformers/compressive/index.html">压缩变压器</a></li>
+</a><h3><strong><a href="https://nn.labml.ai/ja/">日语(翻译</strong></h3>
+<h2>论文实现</h2>
+<h4><a href="transformers/index.html">Transformers</a></h4>
+<ul><li><a href="transformers/mha.html">多头注意力</a></li>
+<li><a href="transformers/models.html">Transformer 构建模块</a></li>
+<li><a href="transformers/xl/index.html">Transformer XL</a></li>
+<li><a href="transformers/xl/relative_mha.html">相对多头注意力</a></li>
+<li><a href="transformers/rope/index.html">旋转式位置编码 (ROPE)</a></li>
+<li><a href="transformers/alibi/index.html">线性偏差注意力 (AliBI)</a></li>
+<li><a href="transformers/retro/index.html">RETRO</a></li>
+<li><a href="transformers/compressive/index.html">压缩 Transformer</a></li>
 <li><a href="transformers/gpt/index.html">GPT 架构</a></li>
 <li><a href="transformers/glu_variants/simple.html">GLU 变体</a></li>
-<li><a href="transformers/knn/index.html">knn-LM:通过记忆进行泛化</a></li>
-<li><a href="transformers/feedback/index.html">反馈变压器</a></li>
-<li><a href="transformers/switch/index.html">开关变压器</a></li>
-<li><a href="transformers/fast_weights/index.html">快速重量变压器</a></li>
+<li><a href="transformers/knn/index.html">kNN-LM:通过记忆实现泛化</a></li>
+<li><a href="transformers/feedback/index.html">自反馈 Transformer</a></li>
+<li><a href="transformers/switch/index.html">Switch Transformer</a></li>
+<li><a href="transformers/fast_weights/index.html">快速权重 Transformer</a></li>
 <li><a href="transformers/fnet/index.html">FNet</a></li>
-<li><a href="transformers/aft/index.html">免注意变压器</a></li>
-<li><a href="transformers/mlm/index.html">屏蔽语言模型</a></li>
-<li><a href="transformers/mlp_mixer/index.html">MLP 混音器:面向视觉的全 MLP 架构</a></li>
-<li><a href="transformers/gmlp/index.html">注意 MLP (gMLP)</a></li>
-<li><a href="transformers/vit/index.html">视觉变压器 (ViT)</a></li>
+<li><a href="transformers/aft/index.html">无注意力 Transformer</a></li>
+<li><a href="transformers/mlm/index.html">掩码语言模型</a></li>
+<li><a href="transformers/mlp_mixer/index.html">MLP-Mixer:一种用于视觉的全 MLP 架构</a></li>
+<li><a href="transformers/gmlp/index.html">门控多层感知器 (gMLP)</a></li>
+<li><a href="transformers/vit/index.html">视觉 Transformer (ViT)</a></li>
 <li><a href="transformers/primer_ez/index.html">Primer</a></li>
-<li><a href="transformers/hour_glass/index.html">沙漏</a></li></ul>
+<li><a href="transformers/hour_glass/index.html">沙漏网络</a></li></ul>
 <h4><a href="neox/index.html">Eleuther GPT-neox</a></h4>
-<li><a href="neox/samples/generate.html"> 48GB GPU 上生成</a></li> <ul>
-<li><a href="neox/samples/finetune.html">两个 48GB GPU 上的 Finetune</a></li>
+<li><a href="neox/samples/generate.html">在一块 48GB GPU 上生成</a></li> <ul>
+<li><a href="neox/samples/finetune.html">在两块 48GB GPU 上微调</a></li>
 <li><a href="neox/utils/llm_int8.html">llm.int8 ()</a></li></ul>
 <h4><a href="diffusion/index.html">扩散模型</a></h4>
 <ul><li><a href="diffusion/ddpm/index.html">去噪扩散概率模型 (DDPM)</a></li>
-<li><a href="diffusion/stable_diffusion/sampler/ddim.html">降噪扩散隐含模型 (DDIM)</a></li>
+<li><a href="diffusion/stable_diffusion/sampler/ddim.html">去噪扩散隐式模型 (DDIM)</a></li>
 <li><a href="diffusion/stable_diffusion/latent_diffusion.html">潜在扩散模型</a></li>
-<li><a href="diffusion/stable_diffusion/index.html">稳定的扩散</a></li></ul>
+<li><a href="diffusion/stable_diffusion/index.html">Stable Diffusion</a></li></ul>
 <h4><a href="gan/index.html">生成对抗网络</a></h4>
-<ul><li><a href="gan/original/index.html">原装 GAN</a></li>
-<li><a href="gan/dcgan/index.html">具有深度卷积网络的 GAN</a></li>
-<li><a href="gan/cycle_gan/index.html">循环增益</a></li>
+<ul><li><a href="gan/original/index.html">原始 GAN</a></li>
+<li><a href="gan/dcgan/index.html">使用深度卷积网络的 GAN</a></li>
+<li><a href="gan/cycle_gan/index.html">循环 GAN</a></li>
 <li><a href="gan/wasserstein/index.html">Wasserstein GAN</a></li>
-<li><a href="gan/wasserstein/gradient_penalty/index.html">Wasserstein GAN 带梯度惩罚</a></li>
+<li><a href="gan/wasserstein/gradient_penalty/index.html">具有梯度惩罚的 Wasserstein GAN</a></li>
 <li><a href="gan/stylegan/index.html">StyleGan 2</a></li></ul>
-<h4><a href="recurrent_highway_networks/index.html">循环高速公路网络</a></h4>
+<h4><a href="recurrent_highway_networks/index.html">循环高速路网络</a></h4>
 <h4><a href="lstm/index.html">LSTM</a></h4>
-<h4><a href="hypernetworks/hyper_lstm.html">超级网络-HyperLSTM</a></h4>
+<h4><a href="hypernetworks/hyper_lstm.html">超网络-HyperLSTM</a></h4>
 <h4><a href="resnet/index.html">ResNet</a></h4>
-<h4><a href="conv_mixer/index.html">混音器</a></h4>
+<h4><a href="conv_mixer/index.html">ConvMixer</a></h4>
 <h4><a href="capsule_networks/index.html">胶囊网络</a></h4>
 <h4><a href="unet/index.html">U-Net</a></h4>
-<h4><a href="sketch_rnn/index.html">素描 RNN</a></h4>
-<h4>图形神经网络</h4>
-<ul><li><a href="graphs/gat/index.html">图关注网络 (GAT)</a></li>
-<li><a href="graphs/gatv2/index.html">Graph 注意力网络 v2 (GATv2)</a></li></ul>
+<h4><a href="sketch_rnn/index.html">Sketch RNN</a></h4>
+<h4>图神经网络</h4>
+<ul><li><a href="graphs/gat/index.html">图注意力网络 (GAT)</a></li>
+<li><a href="graphs/gatv2/index.html">图注意力网络 v2 (GATv2)</a></li></ul>
 <h4><a href="rl/index.html">强化学习</a></h4>
-<li>基于<a href="rl/ppo/gae.html">广义<a href="rl/ppo/index.html">优势估计的近端策略</a></a></li> <ul>
-<li><a href="rl/dqn/index.html">Deep Q Network</a>s 带有<a href="rl/dqn/model.html">决斗网络</a><a href="rl/dqn/replay_buffer.html">优先重播</a>和 Double Q Network。</li></ul>
-<h4><a href="cfr/index.html">反事实遗憾最小化(CFR)</a></h4>
-<p>使用CFR解决信息不完整的游戏,例如使用CFR的扑克。</p>
+<ul><li><a href="rl/ppo/index.html">近端策略优化</a><a href="rl/ppo/gae.html">广义优势估计</a></li>
+<li>具有<a href="rl/dqn/model.html">对抗网络</a><a href="rl/dqn/replay_buffer.html">优先回放 </a>和双 Q 网络的<a href="rl/dqn/index.html">深度 Q 网络</a></li></ul>
+<h4><a href="cfr/index.html">虚拟遗憾最小化(CFR)</a></h4>
+<p>使用 CFR 解决诸如扑克等不完全信息游戏</p>
 <ul><li><a href="cfr/kuhn/index.html">库恩扑克</a></li></ul>
 <h4><a href="optimizers/index.html">优化器</a></h4>
-<ul><li><a href="optimizers/adam.html">Adam</a> </li>
-<li><a href="optimizers/amsgrad.html">AMSGrad</a> </li>
-<li><a href="optimizers/adam_warmup.html">Adam Optimizer with warmup</a> </li>
-<li><a href="optimizers/noam.html">Noam Optimizer</a> </li>
-<li><a href="optimizers/radam.html">Rectified Adam Optimizer</a> </li>
-<li><a href="optimizers/ada_belief.html">AdaBelief Optimizer</a> </li>
+<ul><li><a href="optimizers/adam.html">Adam 优化器</a></li>
+<li><a href="optimizers/amsgrad.html">AMSGrad 优化器</a></li>
+<li><a href="optimizers/adam_warmup.html">具有预热的 Adam 优化器</a></li>
+<li><a href="optimizers/noam.html">Noam 优化器</a></li>
+<li><a href="optimizers/radam.html">RAdam 优化器</a></li>
+<li><a href="optimizers/ada_belief.html">AdaBelief 优化器</a></li>
 <li><a href="optimizers/sophia.html">Sophia-G Optimizer</a></li></ul>
-<h4><a href="normalization/index.html">规范化层</a></h4>
-<ul><li><a href="normalization/batch_norm/index.html">批量标准化</a></li>
-<li><a href="normalization/layer_norm/index.html">层规范化</a></li>
-<li><a href="normalization/instance_norm/index.html">实例规范化</a></li>
-<li><a href="normalization/group_norm/index.html">群组规范化</a></li>
-<li><a href="normalization/weight_standardization/index.html">重量标准化</a></li>
-<li><a href="normalization/batch_channel_norm/index.html">批量信道规范化</a></li>
-<li><a href="normalization/deep_norm/index.html">深度规范</a></li></ul>
+<h4><a href="normalization/index.html">归一化层</a></h4>
+<ul><li><a href="normalization/batch_norm/index.html">批量归一化</a></li>
+<li><a href="normalization/layer_norm/index.html">层归一化</a></li>
+<li><a href="normalization/instance_norm/index.html">实例归一化</a></li>
+<li><a href="normalization/group_norm/index.html">组归一化</a></li>
+<li><a href="normalization/weight_standardization/index.html">权重标准化</a></li>
+<li><a href="normalization/batch_channel_norm/index.html">批-通道归一化</a></li>
+<li><a href="normalization/deep_norm/index.html">DeepNorm</a></li></ul>
 <h4><a href="distillation/index.html">蒸馏</a></h4>
 <h4><a href="adaptive_computation/index.html">自适应计算</a></h4>
 <ul><li><a href="adaptive_computation/ponder_net/index.html">PonderNet</a></li></ul>
 <h4><a href="uncertainty/index.html">不确定性</a></h4>
-<ul><li><a href="uncertainty/evidence/index.html">用于量化分类不确定性的证据性深度学习</a></li></ul>
-<h4><a href="activations/index.html">激活</a></h4>
-<ul><li><a href="activations/fta/index.html">模糊平铺激活</a></li></ul>
+<ul><li><a href="uncertainty/evidence/index.html">用于量化分类不确定性的证据深度学习</a></li></ul>
+<h4><a href="activations/index.html">激活函数</a></h4>
+<ul><li><a href="activations/fta/index.html">模糊平铺激活函数</a></li></ul>
 <h4><a href="sampling/index.html">语言模型采样技术</a></h4>
 <ul><li><a href="sampling/greedy.html">贪婪采样</a></li>
 <li><a href="sampling/temperature.html">温度采样</a></li>
-<li><a href="sampling/top_k.html">前 k 个采样</a></li>
-<li><a href="sampling/nucleus.html">原子核采样</a></li></ul>
-<h4><a href="scaling/index.html">可扩展的训练/推理</a></h4>
-<ul><li><a href="scaling/zero3/index.html">Zero3 内存优化</a></li></ul>
+<li><a href="sampling/top_k.html">Top-K 采样</a></li>
+<li><a href="sampling/nucleus.html">核采样</a></li></ul>
+<h4><a href="scaling/index.html">可扩展训练/推理</a></h4>
+<ul><li><a href="scaling/zero3/index.html">ZeRO-3 内存优化</a></li></ul>
 <h3>安装</h3>
 <pre class="highlight lang-bash"><code><span></span>pip<span class="w"> </span>install<span class="w"> </span>labml-nn</code></pre>

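The translated index closes with the same one-line install command as the English original (pip install labml-nn). For orientation, here is a minimal usage sketch of the installed package, following the multi-head attention page the index links (transformers/mha.html). The labml_nn.transformers.mha module path, the MultiHeadAttention(heads, d_model) constructor, and the [seq_len, batch_size, d_model] tensor layout are assumptions based on the repository layout, not something this commit verifies.

import torch
from labml_nn.transformers.mha import MultiHeadAttention

# Annotated multi-head attention: 8 heads over a 512-dimensional model,
# matching the base Transformer configuration. (Assumed signature.)
mha = MultiHeadAttention(heads=8, d_model=512)

# labml_nn attention modules are assumed to take tensors shaped
# [seq_len, batch_size, d_model]: here a length-10 sequence, batch of 2.
x = torch.randn(10, 2, 512)

# Self-attention: the same tensor serves as query, key, and value.
out = mha(query=x, key=x, value=x)
print(out.shape)  # expected: torch.Size([10, 2, 512])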