Commit 3c48b94

Add Issue and PR templates
1 parent 7eecd11 commit 3c48b94

File tree: 3 files changed, +176 −0 lines changed
Lines changed: 126 additions & 0 deletions
@@ -0,0 +1,126 @@
---
name: Bug report
about: Submit a bug report to help us improve ModelOpt
title: ''
labels: bug
assignees: ''
---

## Describe the bug
<!-- Description of what the bug is, its impact (blocker, should have, nice to have) and any stack traces or error messages. -->


### Steps/Code to reproduce bug
<!-- Please list *minimal* steps or a code snippet for us to be able to reproduce the bug. -->
<!-- A helpful guide on how to craft a minimal bug report: http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports. -->


### Expected behavior


## System information

<details>
<summary><b>Click to expand: Python script to automatically collect system information</b></summary>

```python
import platform
import re
import subprocess


def get_nvidia_gpu_info():
    try:
        nvidia_smi = (
            subprocess.check_output(
                "nvidia-smi --query-gpu=name,memory.total,count --format=csv,noheader,nounits",
                shell=True,
            )
            .decode("utf-8")
            .strip()
            .split("\n")
        )
        if len(nvidia_smi) > 0:
            gpu_name = nvidia_smi[0].split(",")[0].strip()
            gpu_memory = round(float(nvidia_smi[0].split(",")[1].strip()) / 1024, 1)
            gpu_count = len(nvidia_smi)
            return gpu_name, f"{gpu_memory} GB", gpu_count
    except Exception:
        pass
    return "?", "?", "?"


def get_cuda_version():
    try:
        nvcc_output = subprocess.check_output("nvcc --version", shell=True).decode("utf-8")
        match = re.search(r"release (\d+\.\d+)", nvcc_output)
        if match:
            return match.group(1)
    except Exception:
        pass
    # nvcc is missing, failed, or the version string was not found
    return "?"


def get_package_version(package):
    try:
        return getattr(__import__(package), "__version__", "?")
    except Exception:
        return "?"


# Get system info
os_info = f"{platform.system()} {platform.release()}"
if platform.system() == "Linux":
    try:
        os_info = (
            subprocess.check_output("cat /etc/os-release | grep PRETTY_NAME | cut -d= -f2", shell=True)
            .decode("utf-8")
            .strip()
            .strip('"')
        )
    except Exception:
        pass
elif platform.system() == "Windows":
    print("Please add the `windows` label to the issue.")

cpu_arch = platform.machine()
gpu_name, gpu_memory, gpu_count = get_nvidia_gpu_info()
cuda_version = get_cuda_version()

# Print system information in the format required for the issue template
print("=" * 70)
print("- Container used (if applicable): " + "?")
print("- OS (e.g., Ubuntu 22.04, CentOS 7, Windows 10): " + os_info)
print("- CPU architecture (x86_64, aarch64): " + cpu_arch)
print("- GPU name (e.g. H100, A100, L40S): " + gpu_name)
print("- GPU memory size: " + gpu_memory)
print("- Number of GPUs: " + str(gpu_count))
print("- Library versions (if applicable):")
print("  - Python: " + platform.python_version())
print("  - ModelOpt version or commit hash: " + get_package_version("modelopt"))
print("  - CUDA: " + cuda_version)
print("  - PyTorch: " + get_package_version("torch"))
print("  - Transformers: " + get_package_version("transformers"))
print("  - TensorRT-LLM: " + get_package_version("tensorrt_llm"))
print("  - ONNXRuntime: " + get_package_version("onnxruntime"))
print("  - TensorRT: " + get_package_version("tensorrt"))
print("=" * 70)
```

</details>

- Container used (if applicable): ?
- OS (e.g., Ubuntu 22.04, CentOS 7, Windows 10): ? <!-- If Windows, please add the `windows` label to the issue. -->
- CPU architecture (x86_64, aarch64): ?
- GPU name (e.g. H100, A100, L40S): ?
- GPU memory size: ?
- Number of GPUs: ?
- Library versions (if applicable):
  - Python: ?
  - ModelOpt version or commit hash: ?
  - CUDA: ?
  - PyTorch: ?
  - Transformers: ?
  - TensorRT-LLM: ?
  - ONNXRuntime: ?
  - TensorRT: ?
- Any other details that may help: ?
Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
---
name: Feature request
about: Suggest a new feature or model support for ModelOpt
title: ''
labels: feature request
assignees: ''
---

### Detailed description of the requested feature
<!-- Describe the feature being requested and provide any relevant information on what it will be used for. -->


### Timeline
<!-- What time frame do you need this feature by, and what is the impact (blocker, should have, nice to have) of not having it? -->


### Describe alternatives you've considered


### Target Hardware/Use Case
<!-- Target hardware/use case this feature will be used for. -->

.github/PULL_REQUEST_TEMPLATE.md

Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@
## What does this PR do?

**Type of change:** ? <!-- Use one of the following: Bug fix, new feature, new example, new tests, documentation. -->

**Overview:** ?

## Usage
<!-- You can potentially add a usage example below. -->

```python
# Add a code snippet demonstrating how to use this
```

## Testing
<!-- Mention how you have tested your change, if applicable. -->


## Before your PR is "*Ready for review*"
<!-- If you haven't finished some of these items, you can still open a `Draft` PR. -->

- **Make sure you read and follow the [Contributor guidelines](https://github.com/NVIDIA/TensorRT-Model-Optimizer/blob/main/CONTRIBUTING.md)**
- **Is this change backward compatible?**: Yes/No <!--- If No, explain why. -->
- **Did you write any new necessary tests?**: Yes/No
- **Did you add or update any necessary documentation?**: Yes/No
- **Did you update the [Changelog](https://github.com/NVIDIA/TensorRT-Model-Optimizer/blob/main/CHANGELOG.rst)?**: Yes/No <!--- Only for new features, API changes, critical bug fixes or backward-breaking changes. -->


## Additional Information
<!-- E.g. related issue. -->
