We support tracing vLLM workers using the ``torch.profiler`` module. You can enable tracing by setting the ``VLLM_TORCH_PROFILER_DIR`` environment variable to the directory where you want to save the traces, for example ``VLLM_TORCH_PROFILER_DIR=/mnt/traces/``.
The OpenAI server also needs to be started with the ``VLLM_TORCH_PROFILER_DIR`` environment variable set.
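For example, the server can be launched with the variable set inline. This is a minimal sketch: the model name is a placeholder, and depending on your vLLM version ``vllm serve`` may be the preferred entry point instead of the module invocation below.

.. code-block:: console

    $ VLLM_TORCH_PROFILER_DIR=/mnt/traces/ \
        python -m vllm.entrypoints.openai.api_server \
        --model meta-llama/Llama-2-7b-hf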
When using ``benchmarks/benchmark_serving.py``, you can enable profiling by passing the ``--profile`` flag.
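A minimal invocation might look like the following; every flag other than ``--profile`` is an assumed example and should be adapted to your model and dataset setup.

.. code-block:: console

    $ python benchmarks/benchmark_serving.py \
        --backend vllm \
        --model meta-llama/Llama-2-7b-hf \
        --num-prompts 10 \
        --profile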
.. warning::
    Only enable profiling in a development environment.
Traces can be visualized using https://ui.perfetto.dev/.
.. tip::
    Only send a few requests through vLLM when profiling, as the traces can get quite large. Also, there is no need to decompress the traces; they can be viewed directly.
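As an illustrative sketch, the saved traces appear in the configured directory as gzipped JSON files that https://ui.perfetto.dev/ can open as-is. The file name below is hypothetical; the exact naming is chosen by ``torch.profiler``.

.. code-block:: console

    $ ls /mnt/traces/
    myhost_12345.1700000000.pt.trace.json.gz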