**`.github/PULL_REQUEST_TEMPLATE.md`** (3 additions, 3 deletions)
PLEASE FILL IN THE PR DESCRIPTION HERE ENSURING ALL CHECKLIST ITEMS (AT THE BOTTOM) ...

<summary> Essential Elements of an Effective PR Description Checklist </summary>

- [ ] The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
- [ ] The test plan. Please provide the test scripts & test commands. Please state the reasons if your code does not require additional test scripts. For test file guidelines, please check the [test style doc](https://docs.vllm.ai/projects/vllm-omni/en/latest/contributing/ci/tests_style/).
- [ ] The test results. Please paste the results comparison before and after, or the e2e results.
- [ ] (Optional) The necessary documentation update, such as updating `supported_models.md` and `examples` for a new model. **Please run `mkdocs serve` to sync the documentation edits to `./docs`.**
- [ ] (Optional) Release notes update. If your change is user-facing, please update the release notes draft.

</details>
**BEFORE SUBMITTING, PLEASE READ <https://github.com/vllm-project/vllm-omni/blob/main/CONTRIBUTING.md>** (anything written below this line will be removed by GitHub Actions)
As of now, asynchronous (online) profiling is not fully supported in vLLM-Omni. While start_profile() and stop_profile() methods exist, they are only reliable in offline inference scripts (e.g., the provided end2end.py examples). Do not use them in server-mode or streaming scenarios—traces may be incomplete or fail to flush.
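Given those constraints, a typical offline profiling run looks like the sketch below. The trace directory is an illustrative choice, and `end2end.py` is assumed (per the examples mentioned above) to wrap its run in `start_profile()`/`stop_profile()` itself.

```shell
# Set the trace output directory BEFORE the engine starts;
# the profiler is inactive when this variable is unset.
export VLLM_TORCH_PROFILER_DIR=/tmp/omni_traces   # example path

# Run one of the provided offline examples; traces are flushed on exit.
python end2end.py
```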
### 4. Analyzing Omni Traces
Output files are saved to your configured `VLLM_TORCH_PROFILER_DIR`.
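As a quick sanity check that a trace was actually flushed, list the directory. The default path below is a hypothetical fallback for illustration; `torch.profiler`-based traces are typically Chrome-trace JSON files (often gzipped) that can be opened in `chrome://tracing` or Perfetto.

```python
import glob
import os

# Fall back to an assumed example path if the env var is not set.
trace_dir = os.environ.get("VLLM_TORCH_PROFILER_DIR", "/tmp/omni_traces")

# Collect trace files (plain or gzipped JSON) written by the profiler.
traces = sorted(glob.glob(os.path.join(trace_dir, "*.json*")))

print(f"{len(traces)} trace file(s) in {trace_dir}")
for path in traces:
    print(" ", path)
```

An empty listing after a server-mode run is consistent with the caveat above: the trace may never have been flushed.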