Commit 71181f4 (1 parent: 0a984ed)

replace v2.4.2 links to main in troubleshoot docs

6 files changed: +6 −6 lines

docsrc/source_troubleshoot/index.rst

Lines changed: 1 addition & 1 deletion
@@ -48,6 +48,6 @@ If quantization accuracy of your model does not improve after reading Judgeable
 
 References
 ============================================
-[1] `Quantization Troubleshooting for MCT <https://github.com/SonySemiconductorSolutions/mct-model-optimization/blob/v2.4.2/quantization_troubleshooting.md>`_
+[1] `Quantization Troubleshooting for MCT <https://github.com/SonySemiconductorSolutions/mct-model-optimization/blob/main/quantization_troubleshooting.md>`_
 
 [2] `PyTorch documentation (v2.5) <https://docs.pytorch.org/docs/2.5/index.html>`_

docsrc/source_troubleshoot/troubleshoots/enabling_hessian-based_mixed_precision.rst

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ Enabling Hessian-based Mixed Precision
 ============================================
 In Mixed Precision quantization, MCT will assign a different bit width to each weight in the model, depending on the weight's layer sensitivity and a resource constraint defined by the user, such as target model size.
 
-Check out the `Mixed Precision tutorial <https://github.com/SonySemiconductorSolutions/mct-model-optimization/blob/v2.4.2/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mixed_precision_ptq.ipynb>`_ for more information.
+Check out the `Mixed Precision tutorial <https://github.com/SonySemiconductorSolutions/mct-model-optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mixed_precision_ptq.ipynb>`_ for more information.
 
 Overview
 ==============================
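The mixed-precision behavior described in the hunk above can be illustrated with a toy sketch. This is not MCT's actual search algorithm; it is a hypothetical greedy assignment (all function names, sensitivity scores, and sizes are made up) that keeps more bits for sensitive layers while shrinking the rest until a size budget is met:

```python
# Hypothetical sketch of mixed-precision bit-width assignment: start every
# layer at the widest candidate, then repeatedly lower the least sensitive
# layer until the total model size fits the budget.

def assign_bit_widths(layer_params, sensitivities, budget_bits, candidates=(8, 4, 2)):
    """layer_params: parameter count per layer; sensitivities: accuracy cost
    of quantizing each layer; budget_bits: target total model size in bits."""
    widths = [candidates[0]] * len(layer_params)  # everything starts at 8 bits

    def total_bits():
        return sum(p * w for p, w in zip(layer_params, widths))

    while total_bits() > budget_bits:
        # Visit layers from least to most sensitive and shrink the first
        # one that still has a narrower candidate available.
        order = sorted(range(len(widths)), key=lambda i: sensitivities[i])
        for i in order:
            idx = candidates.index(widths[i])
            if idx + 1 < len(candidates):
                widths[i] = candidates[idx + 1]
                break
        else:
            break  # nothing left to shrink; the budget is unreachable

    return widths

params = [1000, 5000, 2000]   # parameter counts per layer (made up)
sens = [0.9, 0.1, 0.5]        # hypothetical sensitivity scores
print(assign_bit_widths(params, sens, budget_bits=40_000))
```

Here the large middle layer is the least sensitive, so it is pushed down to 2 bits while the two sensitive layers keep 8 bits.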

docsrc/source_troubleshoot/troubleshoots/gptq-gradient_based_post_training_quantization.rst

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ When PTQ (either with or without Mixed Precision) fails to deliver the required
 
 In GPTQ, MCT will finetune the model's weights and quantization parameters for improved accuracy. The finetuning process will only use the label-less representative dataset.
 
-Check out the `GPTQ tutorial <https://github.com/SonySemiconductorSolutions/mct-model-optimization/blob/v2.4.2/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mobilenet_gptq.ipynb>`_ for more information and an implementation example.
+Check out the `GPTQ tutorial <https://github.com/SonySemiconductorSolutions/mct-model-optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mobilenet_gptq.ipynb>`_ for more information and an implementation example.
 
 Solution
 =================================
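The GPTQ idea in the hunk above, finetuning quantization parameters on a label-less representative dataset, can be sketched in miniature. This is not MCT's implementation; it is a one-parameter toy (all names and values are made up) that tunes a uniform quantization step size by gradient descent, matching the quantized layer's outputs to the float layer's outputs and handling round() with a straight-through estimator:

```python
# Toy sketch of gradient-based finetuning of a quantization parameter.
# No labels are used: the loss is the mismatch between float and
# quantized outputs on representative inputs.

def quantize(w, step):
    """Uniform quantization of a weight with the given step size."""
    return step * round(w / step)

def finetune_step_size(weights, inputs, step, lr=0.01, iters=500):
    for _ in range(iters):
        grad = 0.0
        for x in inputs:                    # representative data, no labels
            for w in weights:
                err = x * quantize(w, step) - x * w   # output mismatch
                # Straight-through estimator: round(w / step) is treated
                # as a constant when differentiating w.r.t. step.
                grad += 2.0 * err * x * round(w / step)
        step -= lr * grad / len(inputs)
    return step

# A single weight of 0.3 seen through input 1.0: the step size starts at
# 0.5 and converges toward 0.3, where the weight is representable exactly.
tuned = finetune_step_size([0.3], [1.0], step=0.5)
```

In MCT the same principle is applied at scale: many weights and quantizer parameters are updated jointly by backpropagation rather than by this hand-rolled loop.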

docsrc/source_troubleshoot/troubleshoots/mixed_precision_with_model_output_loss_objective.rst

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ Mixed Precision with model output loss objective
 =================================================
 In Mixed Precision quantization, MCT will assign a different bitwidth to each weight in the model, depending on the weight's layer sensitivity and a resource constraint defined by the user, such as target model size.
 
-Check out the `Mixed Precision tutorial <https://github.com/SonySemiconductorSolutions/mct-model-optimization/blob/v2.4.2/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mixed_precision_ptq.ipynb>`_ for more information.
+Check out the `Mixed Precision tutorial <https://github.com/SonySemiconductorSolutions/mct-model-optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mixed_precision_ptq.ipynb>`_ for more information.
 
 Overview
 ==============================

docsrc/source_troubleshoot/troubleshoots/shift_negative_activation.rst

Lines changed: 1 addition & 1 deletion
@@ -48,4 +48,4 @@ Set ``shift_negative_activation_correction`` to True in the ``QuantizationConfig
 .. note::
 
 After activating this flag, you have a few more tweaks to its operation that you can control with the ``shift_negative_ratio``, ``shift_negative_threshold_recalculation`` & ``shift_negative_params_search`` flags.
-Read all about them in the `quantization configuration <https://github.com/SonySemiconductorSolutions/mct-model-optimization/blob/v2.4.2/model_compression_toolkit/core/common/quantization/quantization_config.py#L97-L99>`_ class description.
+Read all about them in the `quantization configuration <https://github.com/SonySemiconductorSolutions/mct-model-optimization/blob/main/model_compression_toolkit/core/common/quantization/quantization_config.py>`_ class description.
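The correction named in the hunk above can be illustrated with a minimal sketch. This is not MCT's code, and the function names are made up; it shows only the core idea behind shift negative activation correction: an activation range with a small negative part is shifted to be non-negative so an unsigned quantizer can be used, and the shift is folded back out afterwards:

```python
# Minimal sketch of shifting a slightly negative activation range before
# unsigned quantization. Assumes at least one positive activation.

def quantize_unsigned(x, n_bits, max_val):
    """Uniform unsigned quantizer over [0, max_val]."""
    levels = 2 ** n_bits - 1
    step = max_val / levels
    return step * max(0, min(levels, round(x / step)))

def shift_and_quantize(activations, n_bits=8):
    shift = -min(min(activations), 0)              # amount needed to reach 0
    max_val = max(a + shift for a in activations)  # shifted range upper bound
    quantized = [quantize_unsigned(a + shift, n_bits, max_val) - shift
                 for a in activations]             # quantize, then unshift
    return quantized, shift

values, applied_shift = shift_and_quantize([-0.1, 0.5, 2.0])
```

Without the shift, the same unsigned grid would clip the negative values to zero; with it, the whole range is covered at the cost of a constant offset that is compensated downstream.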

docsrc/source_troubleshoot/troubleshoots/using_more_samples_in_mixed_precision_quantization.rst

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ Using more samples in Mixed Precision quantization
 ===================================================
 In Mixed Precision quantization, MCT will assign a different bit width to each weight in the model, depending on the weight's layer sensitivity and a resource constraint defined by the user, such as target model size.
 
-Check out the `mixed precision tutorial <https://github.com/SonySemiconductorSolutions/mct-model-optimization/blob/v2.4.2/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mixed_precision_ptq.ipynb>`_ for more information.
+Check out the `mixed precision tutorial <https://github.com/SonySemiconductorSolutions/mct-model-optimization/blob/main/tutorials/notebooks/mct_features_notebooks/pytorch/example_pytorch_mixed_precision_ptq.ipynb>`_ for more information.
 
 Overview
 ==============================
