
Commit 9077a03

Update Github issue templates
Signed-off-by: Keval Morabia <[email protected]>
1 parent b660d39

File tree: 4 files changed, +165 -94 lines


.github/ISSUE_TEMPLATE/1_bug_report.md
Lines changed: 15 additions & 88 deletions

@@ -6,17 +6,32 @@ labels: bug
 assignees: ''
 ---
 
+**Before submitting an issue, please make sure it hasn't been already addressed by searching through the [existing and past issues](https://github.com/NVIDIA/TensorRT-Model-Optimizer/issues?q=is%3Aissue).**
+
 ## Describe the bug
 
 <!-- Description of what the bug is, its impact (blocker, should have, nice to have) and any stack traces or error messages. -->
 
+- ?
+
 ### Steps/Code to reproduce bug
 
 <!-- Please list *minimal* steps or code snippet for us to be able to reproduce the bug. -->
 <!-- A helpful guide on on how to craft a minimal bug report http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports. -->
 
+- ?
+
 ### Expected behavior
 
+### Who can help?
+
+<!-- To expedite the response to your issue, it would be helpful if you could identify the appropriate person(s) to tag using the @ symbol.
+If you are unsure about whom to tag, you can leave it blank, and we will make sure to involve the appropriate person. -->
+
+- ?
+
 ## System information
 
+<!-- Run this script to automatically collect system information: https://github.com/NVIDIA/TensorRT-Model-Optimizer/blob/main/.github/ISSUE_TEMPLATE/get_system_info.py -->
+
 - Container used (if applicable): ?
 - OS (e.g., Ubuntu 22.04, CentOS 7, Windows 10): ? <!-- If Windows, please add the `windows` label to the issue. -->
 - CPU architecture (x86_64, aarch64): ?
@@ -33,91 +48,3 @@ assignees: ''
 - ONNXRuntime: ?
 - TensorRT: ?
 - Any other details that may help: ?
-
-<details>
-<summary><b>Click to expand: Python script to automatically collect system information</b></summary>
-
-```python
-import platform
-import re
-import subprocess
-
-
-def get_nvidia_gpu_info():
-    try:
-        nvidia_smi = (
-            subprocess.check_output(
-                "nvidia-smi --query-gpu=name,memory.total,count --format=csv,noheader,nounits",
-                shell=True,
-            )
-            .decode("utf-8")
-            .strip()
-            .split("\n")
-        )
-        if len(nvidia_smi) > 0:
-            gpu_name = nvidia_smi[0].split(",")[0].strip()
-            gpu_memory = round(float(nvidia_smi[0].split(",")[1].strip()) / 1024, 1)
-            gpu_count = len(nvidia_smi)
-            return gpu_name, f"{gpu_memory} GB", gpu_count
-    except Exception:
-        return "?", "?", "?"
-
-
-def get_cuda_version():
-    try:
-        nvcc_output = subprocess.check_output("nvcc --version", shell=True).decode("utf-8")
-        match = re.search(r"release (\d+\.\d+)", nvcc_output)
-        if match:
-            return match.group(1)
-    except Exception:
-        return "?"
-
-
-def get_package_version(package):
-    try:
-        return getattr(__import__(package), "__version__", "?")
-    except Exception:
-        return "?"
-
-
-# Get system info
-os_info = f"{platform.system()} {platform.release()}"
-if platform.system() == "Linux":
-    try:
-        os_info = (
-            subprocess.check_output("cat /etc/os-release | grep PRETTY_NAME | cut -d= -f2", shell=True)
-            .decode("utf-8")
-            .strip()
-            .strip('"')
-        )
-    except Exception:
-        pass
-elif platform.system() == "Windows":
-    print("Please add the `windows` label to the issue.")
-
-cpu_arch = platform.machine()
-gpu_name, gpu_memory, gpu_count = get_nvidia_gpu_info()
-cuda_version = get_cuda_version()
-
-# Print system information in the format required for the issue template
-print("=" * 70)
-print("- Container used (if applicable): " + "?")
-print("- OS (e.g., Ubuntu 22.04, CentOS 7, Windows 10): " + os_info)
-print("- CPU architecture (x86_64, aarch64): " + cpu_arch)
-print("- GPU name (e.g. H100, A100, L40S): " + gpu_name)
-print("- GPU memory size: " + gpu_memory)
-print("- Number of GPUs: " + str(gpu_count))
-print("- Library versions (if applicable):")
-print(" - Python: " + platform.python_version())
-print(" - ModelOpt version or commit hash: " + get_package_version("modelopt"))
-print(" - CUDA: " + cuda_version)
-print(" - PyTorch: " + get_package_version("torch"))
-print(" - Transformers: " + get_package_version("transformers"))
-print(" - TensorRT-LLM: " + get_package_version("tensorrt_llm"))
-print(" - ONNXRuntime: " + get_package_version("onnxruntime"))
-print(" - TensorRT: " + get_package_version("tensorrt"))
-print("- Any other details that may help: " + "?")
-print("=" * 70)
-```
-
-</details>
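The inline script deleted above (and relocated to `get_system_info.py` in this commit) probes library versions with the `__import__`/`getattr` idiom. A minimal, self-contained sketch of that idiom; the sample package names below are illustrative, not from the diff:

```python
def get_package_version(package: str) -> str:
    """Return a package's __version__, or "?" if the package is missing
    or does not expose a version attribute (mirrors the template script)."""
    try:
        return getattr(__import__(package), "__version__", "?")
    except Exception:
        return "?"


print(get_package_version("sys"))            # sys has no __version__ -> "?"
print(get_package_version("no_such_pkg_x"))  # import fails -> "?"
```

Returning `"?"` on every failure path keeps the report printable even on machines where optional packages (torch, tensorrt, ...) are not installed.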
(new file: "Help needed" issue template; path not shown in this view)
Lines changed: 43 additions & 0 deletions

@@ -0,0 +1,43 @@
+---
+name: Help needed
+about: Raise an issue here if you don't know how to use ModelOpt
+title: ''
+labels: question
+assignees: ''
+---
+
+Make sure you already checked the [examples](https://github.com/NVIDIA/TensorRT-Model-Optimizer/tree/main/examples) and [documentation](https://nvidia.github.io/TensorRT-Model-Optimizer/) before submitting an issue.
+
+## How would you like to use ModelOpt
+
+<!-- Description of what you would like to do with ModelOpt. -->
+
+- ?
+
+### Who can help?
+
+<!-- To expedite the response to your issue, it would be helpful if you could identify the appropriate person(s) to tag using the @ symbol.
+If you are unsure about whom to tag, you can leave it blank, and we will make sure to involve the appropriate person. -->
+
+- ?
+
+## System information
+
+<!-- Run this script to automatically collect system information: https://github.com/NVIDIA/TensorRT-Model-Optimizer/blob/main/.github/ISSUE_TEMPLATE/get_system_info.py -->
+
+- Container used (if applicable): ?
+- OS (e.g., Ubuntu 22.04, CentOS 7, Windows 10): ? <!-- If Windows, please add the `windows` label to the issue. -->
+- CPU architecture (x86_64, aarch64): ?
+- GPU name (e.g. H100, A100, L40S): ?
+- GPU memory size: ?
+- Number of GPUs: ?
+- Library versions (if applicable):
+  - Python: ?
+  - ModelOpt version or commit hash: ?
+  - CUDA: ?
+  - PyTorch: ?
+  - Transformers: ?
+  - TensorRT-LLM: ?
+  - ONNXRuntime: ?
+  - TensorRT: ?
+- Any other details that may help: ?
.github/ISSUE_TEMPLATE/get_system_info.py (new file)
Lines changed: 101 additions & 0 deletions

@@ -0,0 +1,101 @@
+# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Python script to automatically collect system information for reporting Issues."""
+
+import contextlib
+import platform
+import re
+import subprocess
+
+
+def get_nvidia_gpu_info():
+    """Get NVIDIA GPU Information."""
+    try:
+        nvidia_smi = (
+            subprocess.check_output(
+                "nvidia-smi --query-gpu=name,memory.total,count --format=csv,noheader,nounits",
+                shell=True,
+            )
+            .decode("utf-8")
+            .strip()
+            .split("\n")
+        )
+        if len(nvidia_smi) > 0:
+            gpu_name = nvidia_smi[0].split(",")[0].strip()
+            gpu_memory = round(float(nvidia_smi[0].split(",")[1].strip()) / 1024, 1)
+            gpu_count = len(nvidia_smi)
+            return gpu_name, f"{gpu_memory} GB", gpu_count
+    except Exception:
+        return "?", "?", "?"
+
+
+def get_cuda_version():
+    """Get CUDA Version."""
+    try:
+        nvcc_output = subprocess.check_output("nvcc --version", shell=True).decode("utf-8")
+        match = re.search(r"release (\d+\.\d+)", nvcc_output)
+        if match:
+            return match.group(1)
+    except Exception:
+        return "?"
+
+
+def get_package_version(package):
+    """Get package version."""
+    try:
+        return getattr(__import__(package), "__version__", "?")
+    except Exception:
+        return "?"
+
+
+# Get system info
+os_info = f"{platform.system()} {platform.release()}"
+if platform.system() == "Linux":
+    with contextlib.suppress(Exception):
+        os_info = (
+            subprocess.check_output(
+                "cat /etc/os-release | grep PRETTY_NAME | cut -d= -f2", shell=True
+            )
+            .decode("utf-8")
+            .strip()
+            .strip('"')
+        )
+elif platform.system() == "Windows":
+    print("Please add the `windows` label to the issue.")
+
+cpu_arch = platform.machine()
+gpu_name, gpu_memory, gpu_count = get_nvidia_gpu_info()
+cuda_version = get_cuda_version()
+
+# Print system information in the format required for the issue template
+print("=" * 70)
+print("- Container used (if applicable): " + "?")
+print("- OS (e.g., Ubuntu 22.04, CentOS 7, Windows 10): " + os_info)
+print("- CPU architecture (x86_64, aarch64): " + cpu_arch)
+print("- GPU name (e.g. H100, A100, L40S): " + gpu_name)
+print("- GPU memory size: " + gpu_memory)
+print("- Number of GPUs: " + str(gpu_count))
+print("- Library versions (if applicable):")
+print(" - Python: " + platform.python_version())
+print(" - ModelOpt version or commit hash: " + get_package_version("modelopt"))
+print(" - CUDA: " + cuda_version)
+print(" - PyTorch: " + get_package_version("torch"))
+print(" - Transformers: " + get_package_version("transformers"))
+print(" - TensorRT-LLM: " + get_package_version("tensorrt_llm"))
+print(" - ONNXRuntime: " + get_package_version("onnxruntime"))
+print(" - TensorRT: " + get_package_version("tensorrt"))
+print("- Any other details that may help: " + "?")
+print("=" * 70)
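Two details of this relocated script are worth illustrating: CUDA is read out of the `nvcc --version` banner with a regex, and the new file replaces the old `try/except: pass` with `contextlib.suppress`. A runnable sketch; the `nvcc` banner string below is a made-up sample, not captured output:

```python
import contextlib
import re


def parse_cuda_release(nvcc_output: str) -> str:
    """Extract the CUDA release number from nvcc's version banner."""
    match = re.search(r"release (\d+\.\d+)", nvcc_output)
    return match.group(1) if match else "?"


# Illustrative sample banner (not from a real machine).
sample = "Cuda compilation tools, release 12.4, V12.4.131"
print(parse_cuda_release(sample))  # 12.4

# contextlib.suppress is the tidier equivalent of try/except/pass:
# if the probe fails, the preset fallback value survives untouched.
os_info = "fallback"
with contextlib.suppress(Exception):
    os_info = int("not a number")  # raises ValueError; suppressed
print(os_info)  # fallback
```

This is the same best-effort pattern the script uses for `/etc/os-release`: every probe either succeeds or silently leaves a placeholder.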

pyproject.toml
Lines changed: 6 additions & 6 deletions

(The first three hunks only realign trailing comments; the substantive change is the last hunk, which adds `.github/` to bandit's `exclude_dirs`.)

@@ -35,7 +35,7 @@ select = [
   "I", # isort
   "ISC", # flake8-implicit-str-concat
   "N", # pep8 naming
-  "PERF", # Perflint
+  "PERF", # Perflint
   "PGH", # pygrep-hooks
   "PIE", # flake8-pie
   "PLE", # pylint errors
@@ -44,7 +44,7 @@ select = [
   "RUF", # ruff
   "SIM", # flake8-simplify
   "TC", # flake8-type-checking
-  "UP", # pyupgrade
+  "UP", # pyupgrade
   "W", # pycodestyle warnings
 ]
 extend-ignore = [
@@ -62,9 +62,9 @@ extend-ignore = [
 "__init__.py" = ["F401", "F403"]
 "examples/*" = ["D"]
 "tests/*" = ["B017", "D", "E402", "PT012"]
-"*/_[a-zA-Z]*" = ["D"] # Private packages (_abc/*.py) or modules (_xyz.py)
-"*.ipynb" = ["D", "E501"] # Ignore missing docstrings or line length for Jupyter notebooks
-"modelopt/torch/quantization/triton/*" = ["N803", "N806", "E731"] # triton style
+"*/_[a-zA-Z]*" = ["D"] # Private packages (_abc/*.py) or modules (_xyz.py)
+"*.ipynb" = ["D", "E501"] # Ignore missing docstrings or line length for Jupyter notebooks
+"modelopt/torch/quantization/triton/*" = ["N803", "N806", "E731"] # triton style


 [tool.ruff.lint.pycodestyle]
@@ -154,7 +154,7 @@ exclude_lines = [


 [tool.bandit]
-exclude_dirs = ["examples/", "tests/"]
+exclude_dirs = [".github/", "examples/", "tests/"]
 # Do not change `skips`. It should be consistent with NVIDIA's Wheel-CI-CD bandit.yml config.
 # Use of `# nosec BXXX` requires special approval
 skips = [
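For context on one of the rule families named in these hunks: `PERF` (Perflint) flags, among other things, manual accumulation loops that a comprehension would express more directly. A small illustration of the kind of rewrite such rules suggest; the example code is mine, not from the repository:

```python
# Pattern Perflint-style rules tend to flag: building a list via append in a loop.
squares_loop = []
for n in range(5):
    squares_loop.append(n * n)

# The suggested equivalent comprehension: same result, one expression.
squares_comp = [n * n for n in range(5)]

print(squares_loop == squares_comp)  # True
```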

0 commit comments