
Commit 1ef1d72

Code quality improvements - typos, formatting, etc.
Signed-off-by: Keval Morabia <[email protected]>
1 parent c391942 commit 1ef1d72

137 files changed (+336, -340 lines)


.github/ISSUE_TEMPLATE/1_bug_report.md

Lines changed: 0 additions & 4 deletions
@@ -9,15 +9,12 @@ assignees: ''
 ## Describe the bug
 <!-- Description of what the bug is, its impact (blocker, should have, nice to have) and any stack traces or error messages. -->
 
-
 ### Steps/Code to reproduce bug
 <!-- Please list *minimal* steps or code snippet for us to be able to reproduce the bug. -->
 <!-- A helpful guide on on how to craft a minimal bug report http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports. -->
 
-
 ### Expected behavior
 
-
 ## System information
 
 - Container used (if applicable): ?
@@ -37,7 +34,6 @@ assignees: ''
 - TensorRT: ?
 - Any other details that may help: ?
 
-
 <details>
 <summary><b>Click to expand: Python script to automatically collect system information</b></summary>
 

.github/ISSUE_TEMPLATE/2_feature_request.md

Lines changed: 0 additions & 3 deletions
@@ -9,13 +9,10 @@ assignees: ''
 ### Detailed description of the requested feature
 <!-- Description of the feature being requested. Also provide any relevant information on what the feature will be used for -->
 
-
 ### Timeline
 <!-- What time-frame do you need this feature by and what is the impact (blocker, should have, nice to have) of not having the feature -->
 
-
 ### Describe alternatives you've considered
 
-
 ### Target hardware/use case
 <!-- Target hardware/use case this feature will be used for -->

.github/PULL_REQUEST_TEMPLATE.md

Lines changed: 0 additions & 2 deletions
@@ -14,7 +14,6 @@
 ## Testing
 <!-- Mention how have you tested your change if applicable. -->
 
-
 ## Before your PR is "*Ready for review*"
 <!-- If you haven't finished some of the above items you can still open `Draft` PR. -->
 
@@ -24,6 +23,5 @@
 - **Did you add or update any necessary documentation?**: Yes/No
 - **Did you update [Changelog](https://github.com/NVIDIA/TensorRT-Model-Optimizer/blob/main/CHANGELOG.rst)?**: Yes/No <!--- Only for new features, API changes, critical bug fixes or bw breaking changes. -->
 
-
 ## Additional Information
 <!-- E.g. related issue. -->

.gitlab/tests.yml

Lines changed: 5 additions & 16 deletions
@@ -1,9 +1,6 @@
 # NOTE: Make sure this file is consistent with .github/workflows/{unit,gpu}_tests.yml
 .tests-default:
   stage: tests
-  variables:
-    PYTHON: 12
-    TORCH: 28
   rules:
     - if: $JET_ONLY != null
       when: never
@@ -14,25 +11,17 @@
 unit:
   extends: .tests-default
   timeout: 30m
+  variables:
+    PYTHON: 12
+    TORCH: 28
+    TRANSFORMERS: latest
   image: python:3.$PYTHON
   before_script:
     # Install cmake to build onnxsim from sdists for Python 3.12 until http://github.com/daquexian/onnx-simplifier/pull/353
    - if [ "$PYTHON" = "12" ]; then apt-get update && apt-get install -y cmake; fi
    - pip install tox
  script:
-    - tox -e py3$PYTHON-torch$TORCH-unit
-
-multi-py-unit:
-  extends: unit
-  parallel:
-    matrix:
-      - PYTHON: [10, 11]
-
-multi-torch-unit:
-  extends: unit
-  parallel:
-    matrix:
-      - TORCH: [26, 27]
+    - tox -e py3$PYTHON-torch$TORCH-tf_$TRANSFORMERS-unit
 
 ##### GPU Tests #####
 gpu:
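
For reference, a minimal sketch of running the same unit test environment locally, substituting the variable values set in this job (PYTHON=12, TORCH=28, TRANSFORMERS=latest) into the `tox -e` pattern above. This assumes the repository's tox configuration defines a matching environment name.

```bash
# Sketch: reproduce the CI unit job locally (assumes the tox env exists in the repo's tox config)
pip install tox
tox -e py312-torch28-tf_latest-unit
```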

.markdownlint-cli2.yaml

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
+config:
+  MD013: false # line-length
+  MD024: false # no-duplicate-heading
+  MD028: false # no-blanks-blockquote
+  MD033: false # no-inline-html
+  MD041: false # first-line-heading
+  MD059: false # no-hard-tabs
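
To try this new lint config locally, a quick sketch (assumes markdownlint-cli2 is installed, e.g. via `npm install -g markdownlint-cli2`, and that the config is picked up from the repository root):

```bash
# Lint all Markdown files against the new .markdownlint-cli2.yaml
markdownlint-cli2 "**/*.md"
# Or run it through the pre-commit hook added in this commit
pre-commit run markdownlint-cli2 --all-files
```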

.pre-commit-config.yaml

Lines changed: 22 additions & 23 deletions
@@ -1,7 +1,7 @@
 # NOTE: Make sure to update version in dev requirements (setup.py) as well!
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v5.0.0
+    rev: v6.0.0
     hooks:
       - id: check-added-large-files
         args: [--maxkb=500, --enforce-all]
@@ -15,35 +15,24 @@ repos:
       - id: check-merge-conflict
       - id: check-symlinks
       - id: check-toml
-      - id: check-yaml
-        args: [--allow-multiple-documents]
-      - id: debug-statements
-      - id: end-of-file-fixer
       - id: mixed-line-ending
         args: [--fix=lf]
       - id: requirements-txt-fixer
-      - id: trailing-whitespace
-
-  - repo: https://github.com/executablebooks/mdformat
-    rev: 0.7.22
-    hooks:
-      - id: mdformat
-        exclude: ^.github/
 
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.11.9
+    rev: v0.12.11
     hooks:
-      - id: ruff
+      - id: ruff-check
         args: [--fix, --exit-non-zero-on-fix]
       - id: ruff-format
 
   - repo: https://github.com/pre-commit/mirrors-mypy
-    rev: v1.15.0
+    rev: v1.17.1
     hooks:
       - id: mypy
 
   - repo: https://github.com/pre-commit/mirrors-clang-format
-    rev: v20.1.0
+    rev: v21.1.0
     hooks:
       - id: clang-format
         types_or: [c++, c, c#, cuda, java, javascript, objective-c, proto] # no json!
@@ -131,23 +120,33 @@ repos:
          - --allow-past-years
         types_or: [shell]
 
-  - repo: https://github.com/keith/pre-commit-buildifier
-    rev: 8.0.3
-    hooks:
-      - id: buildifier
-      - id: buildifier-lint
-
   - repo: https://github.com/PyCQA/bandit
     rev: 1.7.9
     hooks:
       - id: bandit
         args: ["-c", "pyproject.toml", "-q"]
         additional_dependencies: ["bandit[toml]"]
 
+  - repo: https://github.com/DavidAnson/markdownlint-cli2
+    rev: v0.18.1
+    hooks:
+      - id: markdownlint-cli2
+        args: ["--fix"]
+
+  ##### Manual hooks (Expect many false positives)
+  # These hooks are only run with `pre-commit run --all-files --hook-stage manual <hook_id>`
+
+  # Spell checker
+  - repo: https://github.com/crate-ci/typos
+    rev: v1.35.8
+    hooks:
+      - id: typos
+        stages: [manual]
+
   # Link checker
   - repo: https://github.com/lycheeverse/lychee.git
     rev: v0.15.1
     hooks:
       - id: lychee
         args: ["--no-progress", "--exclude-loopback"]
-        stages: [manual] # Only run with `pre-commit run --all-files --hook-stage manual lychee`
+        stages: [manual]
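
As the comment added in this config notes, the manual-stage hooks are skipped on normal commits and must be invoked explicitly, for example:

```bash
# Run the manual-stage hooks added/kept in this commit (expect many false positives)
pre-commit run --all-files --hook-stage manual typos   # spell checker
pre-commit run --all-files --hook-stage manual lychee  # link checker
```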

CHANGELOG.rst

Lines changed: 4 additions & 4 deletions
@@ -18,7 +18,7 @@ Model Optimizer Changelog (Linux)
 - Add support for QAT with HuggingFace + DeepSpeed. See ``examples/gpt_oss`` for an example.
 - Add support for QAT with LoRA. The LoRA adapters can be folded into the base model after QAT and deployed just like a regular PTQ model. See ``examples/gpt_oss`` for an example.
 - ModelOpt provides convenient trainers such as :class:`QATTrainer`, :class:`QADTrainer`, :class:`KDTrainer`, :class:`QATSFTTrainer` which inherits from Huggingface trainers.
-  ModelOpt trainers can be used as drop in replacement of the correspoding Huggingface trainer. See usage examples in ``examples/gpt_oss``, ``examples/llm_qat`` or ``examples/llm_distill``.
+  ModelOpt trainers can be used as drop in replacement of the corresponding Huggingface trainer. See usage examples in ``examples/gpt_oss``, ``examples/llm_qat`` or ``examples/llm_distill``.
 - (Experimental) Add quantization support for custom TensorRT op in ONNX models.
 - Add support for Minifinetuning (MFT; https://arxiv.org/abs/2506.15702) self-corrective distillation, which enables training on small datasets with severely mitigated catastrophic forgetting.
 - Add tree decoding support for Megatron Eagle models.
@@ -55,8 +55,8 @@ Model Optimizer Changelog (Linux)
 
 - NeMo and Megatron-LM distributed checkpoint (``torch-dist``) stored with legacy version can no longer be loaded. The remedy is to load the legacy distributed checkpoint with 0.29 and store a ``torch`` checkpoint and resume with 0.31 to convert to a new format. The following changes only apply to storing and resuming distributed checkpoint.
   - ``quantizer_state`` of :class:`TensorQuantizer <modelopt.torch.quantization.nn.modules.TensorQuantizer>` is now stored in ``extra_state`` of :class:`QuantModule <modelopt.torch.quantization.nn.module.QuantModule>` where it used to be stored in the sharded ``modelopt_state``.
-  - The dtype and shape of ``amax`` and ``pre_quant_scale`` stored in the distributed checkpoint are now retored. Some dtype and shape are previously changed to make all decoder layers to have homogeneous structure in the checkpoint.
-  - Togather with megatron.core-0.13, quantized model will store and resume distributed checkpoint in a heterogenous format.
+  - The dtype and shape of ``amax`` and ``pre_quant_scale`` stored in the distributed checkpoint are now restored. Some dtype and shape are previously changed to make all decoder layers to have homogeneous structure in the checkpoint.
+  - Together with megatron.core-0.13, quantized model will store and resume distributed checkpoint in a heterogenous format.
 - auto_quantize API now accepts a list of quantization config dicts as the list of quantization choices.
   - This API previously accepts a list of strings of quantization format names. It was therefore limited to only pre-defined quantization formats unless through some hacks.
   - With this change, now user can easily use their own custom quantization formats for auto_quantize.
@@ -146,7 +146,7 @@ Model Optimizer Changelog (Linux)
 **New Features**
 
 - Support fast hadamard transform in :class:`TensorQuantizer <modelopt.torch.quantization.nn.modules.TensorQuantizer>`.
-  It can be used for rotation based quantization methods, e.g. QuaRot. Users need to install the package `fast_hadamard_transfrom <https://github.com/Dao-AILab/fast-hadamard-transform>`_ to use this feature.
+  It can be used for rotation based quantization methods, e.g. QuaRot. Users need to install the package `fast_hadamard_transform <https://github.com/Dao-AILab/fast-hadamard-transform>`_ to use this feature.
 - Add affine quantization support for the KV cache, resolving the low accuracy issue in models such as Qwen2.5 and Phi-3/3.5.
 - Add FSDP2 support. FSDP2 can now be used for QAT.
 - Add `LiveCodeBench <https://livecodebench.github.io/>`_ and `Simple Evals <https://github.com/openai/simple-evals>`_ to the ``llm_eval`` examples.

CONTRIBUTING.md

Lines changed: 2 additions & 2 deletions
@@ -105,13 +105,13 @@ git push origin <branch> --force-with-lease
 
 This will append the following to your commit message:
 
-```
+```text
 Signed-off-by: Your Name <[email protected]>
 ```
 
 - Full text of the Developer Certificate of Origin (DCO):
 
-  ```
+  ```text
   Developer Certificate of Origin
   Version 1.1
 
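
For context, the ``Signed-off-by`` trailer shown in this section is what git itself appends when a commit is signed off, for example:

```bash
# Sign off a new commit so git appends the Signed-off-by trailer
git commit -s -m "Your commit message"
# Or add the sign-off to the most recent commit without changing its message
git commit --amend -s --no-edit
```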

README.md

Lines changed: 1 addition & 1 deletion
@@ -123,7 +123,7 @@ Visit our [installation guide](https://nvidia.github.io/TensorRT-Model-Optimizer
 Model Optimizer is now open source! We welcome any feedback, feature requests and PRs.
 Please read our [Contributing](./CONTRIBUTING.md) guidelines for details on how to contribute to this project.
 
-### Top Contributers
+### Top Contributors
 
 [![Contributors](https://contrib.rocks/image?repo=NVIDIA/TensorRT-Model-Optimizer)](https://github.com/NVIDIA/TensorRT-Model-Optimizer/graphs/contributors)
 

docker/README.md

Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
+# ModelOpt Docker
+
+This folder contains the Dockerfile for the ModelOpt docker image.
+
+## Building the Docker Image
+
+To build the docker image, run the following command from the root of the repository:
+
+```bash
+bash docker/build.sh
+```
+
+The docker image will be built and tagged as `docker.io/library/modelopt_examples:latest`.
+
+> [!NOTE]
+> For ONNX PTQ, use the optimized docker image from [onnx_ptq Dockerfile](../examples/onnx_ptq/docker/) instead of this one.
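
A possible end-to-end use of the new README's instructions, as a sketch: the image name comes from the README above, while the `docker run` flags are illustrative assumptions (GPU access requires the NVIDIA Container Toolkit on the host).

```bash
# Build the image as documented in the new README, then start a container from it
bash docker/build.sh
docker run --gpus all -it --rm docker.io/library/modelopt_examples:latest
```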
