Commit 791043f

address reviews
1 parent 2d61c85 commit 791043f

4 files changed: +5 −5 lines changed


guides/ipynb/quantization/overview.ipynb renamed to guides/ipynb/quantization_overview.ipynb

Lines changed: 1 addition & 1 deletion
@@ -160,7 +160,7 @@
 "\n",
 "* `Dense`\n",
 "* `EinsumDense`\n",
-"* `Embedding` (available in KerasHub)\n",
+"* `Embedding`\n",
 "* `ReversibleEmbedding` (available in KerasHub)\n",
 "\n",
 "Any composite layers that are built from the above (for example, `MultiHeadAttention`, `GroupedQueryAttention`, feed-forward blocks in Transformers) inherit quantization support by construction. This covers the majority of modern encoder-only and decoder-only Transformer architectures.\n",

guides/md/quantization/overview.md renamed to guides/md/quantization_overview.md

Lines changed: 2 additions & 2 deletions
@@ -5,7 +5,7 @@
 **Last modified:** 2025/10/09<br>
 **Description:** Overview of quantization in Keras (int8, float8, int4, GPTQ).
 
-<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/guides/ipynb/quantization/overview.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/guides/quantization/overview.py)
+<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/guides/ipynb/quantization_overview.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/guides/quantization_overview.py)
 
 ---
 
@@ -135,7 +135,7 @@ Keras supports the following core layers in its quantization framework:
 
 * `Dense`
 * `EinsumDense`
-* `Embedding` (available in KerasHub)
+* `Embedding`
 * `ReversibleEmbedding` (available in KerasHub)
 
 Any composite layers that are built from the above (for example, `MultiHeadAttention`, `GroupedQueryAttention`, feed-forward blocks in Transformers) inherit quantization support by construction. This covers the majority of modern encoder-only and decoder-only Transformer architectures.

guides/quantization/overview.py renamed to guides/quantization_overview.py

Lines changed: 1 addition & 1 deletion
@@ -127,7 +127,7 @@
 
 * `Dense`
 * `EinsumDense`
-* `Embedding` (available in KerasHub)
+* `Embedding`
 * `ReversibleEmbedding` (available in KerasHub)
 
 Any composite layers that are built from the above (for example, `MultiHeadAttention`, `GroupedQueryAttention`, feed-forward blocks in Transformers) inherit quantization support by construction. This covers the majority of modern encoder-only and decoder-only Transformer architectures.
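For context on the guide text touched above: because quantization support lives at the layer level, a model built from the listed core layers can be quantized in a single call. Below is a minimal sketch of that workflow, assuming Keras 3's `Model.quantize` entry point that the renamed guide documents; the layer sizes and input shape are illustrative only.

import numpy as np
import keras

# Two of the core layers the guide lists as quantizable.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10),
])
model.build((None, 32))  # weights must exist before quantizing

# Post-training, in-place quantization; "int8" is one of the modes the
# guide's description names (int8, float8, int4, GPTQ).
model.quantize("int8")

print(model.predict(np.random.rand(2, 32).astype("float32")).shape)  # (2, 10)

Composite layers such as `MultiHeadAttention` would be handled the same way, since, as the guide notes, they inherit quantization support from the core layers they are built from.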

scripts/guides_master.py

Lines changed: 1 addition & 1 deletion
@@ -124,7 +124,7 @@
             "title": "Orbax Checkpointing in Keras",
         },
         {
-            "path": "quantization/overview",
+            "path": "quantization_overview",
             "title": "Quantization in Keras",
         },
         # {
