Commit cc9f38b

Update optimizer descriptions in quantization_facade.py for Keras and PyTorch
1 parent e8bc730 commit cc9f38b

File tree

5 files changed: +9 -9 lines changed

docs/api/api_docs/methods/get_keras_gptq_config.html

Lines changed: 2 additions & 2 deletions
@@ -51,8 +51,8 @@ <h3>Navigation</h3>
 <dt class="field-even">Parameters<span class="colon">:</span></dt>
 <dd class="field-even"><ul class="simple">
 <li><p><strong>n_epochs</strong> (<em>int</em>) – Number of epochs for running the representative dataset for fine-tuning.</p></li>
-<li><p><strong>optimizer</strong> (<em>OptimizerV2</em>) – Keras optimizer to use for fine-tuning for auxiliry variable with a default learning rate set to 0.2.</p></li>
-<li><p><strong>optimizer_rest</strong> (<em>OptimizerV2</em>) – Keras optimizer to use for fine-tuning of the bias variable.</p></li>
+<li><p><strong>optimizer</strong> (<em>OptimizerV2</em>) – Keras optimizer to use for fine-tuning for auxiliary variable. Default: Adam(learning rate set to 3e-2).</p></li>
+<li><p><strong>optimizer_rest</strong> (<em>OptimizerV2</em>) – Keras optimizer to use for fine-tuning of the bias variable. Default: Adam(learning rate set to 1e-4).</p></li>
 <li><p><strong>loss</strong> (<em>Callable</em>) – loss to use during fine-tuning. should accept 4 lists of tensors. 1st list of quantized tensors, the 2nd list is the float tensors, the 3rd is a list of quantized weights and the 4th is a list of float weights.</p></li>
 <li><p><strong>log_function</strong> (<em>Callable</em>) – Function to log information about the gptq process.</p></li>
 <li><p><strong>use_hessian_based_weights</strong> (<em>bool</em>) – Whether to use Hessian-based weights for weighted average loss.</p></li>

docs/api/api_docs/methods/get_pytroch_gptq_config.html

Lines changed: 2 additions & 2 deletions
@@ -51,8 +51,8 @@ <h3>Navigation</h3>
 <dt class="field-even">Parameters<span class="colon">:</span></dt>
 <dd class="field-even"><ul class="simple">
 <li><p><strong>n_epochs</strong> (<em>int</em>) – Number of epochs for running the representative dataset for fine-tuning.</p></li>
-<li><p><strong>optimizer</strong> (<em>Optimizer</em>) – Pytorch optimizer to use for fine-tuning for auxiliry variable.</p></li>
-<li><p><strong>optimizer_rest</strong> (<em>Optimizer</em>) – Pytorch optimizer to use for fine-tuning of the bias variable.</p></li>
+<li><p><strong>optimizer</strong> (<em>Optimizer</em>) – Pytorch optimizer to use for fine-tuning for auxiliary variable. Default: Adam(learning rate set to 3e-2).</p></li>
+<li><p><strong>optimizer_rest</strong> (<em>Optimizer</em>) – Pytorch optimizer to use for fine-tuning of the bias variable. Default: Adam(learning rate set to 1e-4).</p></li>
 <li><p><strong>loss</strong> (<em>Callable</em>) – loss to use during fine-tuning. See the default loss function for the exact interface.</p></li>
 <li><p><strong>log_function</strong> (<em>Callable</em>) – Function to log information about the gptq process.</p></li>
 <li><p><strong>use_hessian_based_weights</strong> (<em>bool</em>) – Whether to use Hessian-based weights for weighted average loss.</p></li>

docs/searchindex.js

Lines changed: 1 addition & 1 deletion
(Generated file; diff not rendered.)

model_compression_toolkit/gptq/keras/quantization_facade.py

Lines changed: 2 additions & 2 deletions
@@ -75,8 +75,8 @@ def get_keras_gptq_config(n_epochs: int,
 
 args:
 n_epochs (int): Number of epochs for running the representative dataset for fine-tuning.
-optimizer (OptimizerV2): Keras optimizer to use for fine-tuning for auxiliry variable with a default learning rate set to 0.2.
-optimizer_rest (OptimizerV2): Keras optimizer to use for fine-tuning of the bias variable.
+optimizer (OptimizerV2): Keras optimizer to use for fine-tuning for auxiliary variable. Default: Adam(learning rate set to 3e-2).
+optimizer_rest (OptimizerV2): Keras optimizer to use for fine-tuning of the bias variable. Default: Adam(learning rate set to 1e-4).
 loss (Callable): loss to use during fine-tuning. should accept 4 lists of tensors. 1st list of quantized tensors, the 2nd list is the float tensors, the 3rd is a list of quantized weights and the 4th is a list of float weights.
 log_function (Callable): Function to log information about the gptq process.
 use_hessian_based_weights (bool): Whether to use Hessian-based weights for weighted average loss.

model_compression_toolkit/gptq/pytorch/quantization_facade.py

Lines changed: 2 additions & 2 deletions
@@ -69,8 +69,8 @@ def get_pytorch_gptq_config(n_epochs: int,
 
 args:
 n_epochs (int): Number of epochs for running the representative dataset for fine-tuning.
-optimizer (Optimizer): Pytorch optimizer to use for fine-tuning for auxiliry variable.
-optimizer_rest (Optimizer): Pytorch optimizer to use for fine-tuning of the bias variable.
+optimizer (Optimizer): Pytorch optimizer to use for fine-tuning for auxiliary variable. Default: Adam(learning rate set to 3e-2).
+optimizer_rest (Optimizer): Pytorch optimizer to use for fine-tuning of the bias variable. Default: Adam(learning rate set to 1e-4).
 loss (Callable): loss to use during fine-tuning. See the default loss function for the exact interface.
 log_function (Callable): Function to log information about the gptq process.
 use_hessian_based_weights (bool): Whether to use Hessian-based weights for weighted average loss.
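The documented defaults can be reproduced explicitly in plain PyTorch. This is a minimal sketch, not MCT's internal code: the two `Parameter` objects are dummy stand-ins for the auxiliary (rounding) variable and the bias variable that the GPTQ facade actually optimizes.

```python
import torch

# Dummy stand-ins for the variables GPTQ fine-tunes; the real facade
# builds its optimizers over the model's auxiliary and bias variables.
aux_var = torch.nn.Parameter(torch.zeros(1))
bias_var = torch.nn.Parameter(torch.zeros(1))

# The defaults described in the updated docstrings:
# Adam(lr=3e-2) for the auxiliary variable, Adam(lr=1e-4) for the bias.
optimizer = torch.optim.Adam([aux_var], lr=3e-2)
optimizer_rest = torch.optim.Adam([bias_var], lr=1e-4)

print(optimizer.param_groups[0]["lr"])       # 0.03
print(optimizer_rest.param_groups[0]["lr"])  # 0.0001
```

Splitting the learning rates this way lets the rounding variables move aggressively while the bias correction stays conservative, which is the change this commit documents.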
