VLLM_TARGET_DEVICE.lower()
1 parent ee2eb6e commit 5739371
vllm/envs.py
@@ -213,7 +213,7 @@ def get_vllm_port() -> Optional[int]:
     # Target device of vLLM, supporting [cuda (by default),
     # rocm, neuron, cpu]
     "VLLM_TARGET_DEVICE":
-    lambda: os.getenv("VLLM_TARGET_DEVICE", "cuda"),
+    lambda: os.getenv("VLLM_TARGET_DEVICE", "cuda").lower(),
 
     # Maximum number of compilation jobs to run in parallel.
     # By default this is the number of CPUs
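The change appends `.lower()` to the env-var lookup, so values such as `CUDA` or `Rocm` normalize to `cuda` / `rocm` instead of failing downstream device checks. A minimal standalone sketch of the patched behavior (the helper name `get_target_device` is illustrative; in vLLM the lookup lives in the `envs.py` lambda table):

```python
import os

def get_target_device() -> str:
    # Mirrors the patched lambda: read VLLM_TARGET_DEVICE, default to
    # "cuda", and lowercase it so the value is case-insensitive.
    return os.getenv("VLLM_TARGET_DEVICE", "cuda").lower()

os.environ["VLLM_TARGET_DEVICE"] = "CUDA"
print(get_target_device())  # → cuda

os.environ.pop("VLLM_TARGET_DEVICE", None)
print(get_target_device())  # → cuda (default when the variable is unset)
```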