Commit bd93ac1

fixes formatting
1 parent 7fbdd2c commit bd93ac1

File tree

1 file changed, +6 -1 lines changed


solutions/security/ai/connect-to-vLLM.md

Lines changed: 6 additions & 1 deletion
````diff
@@ -73,9 +73,12 @@ vllm/vllm-openai:v0.9.1 \
 --tensor-parallel-size 2
 ```
 
-.**Click to expand a full explanation of the command**
+
+.Click to expand a full explanation of the command
 [%collapsible]
 =====
+
+```
 `--gpus all`: Exposes all available GPUs to the container.
 `--name`: Defines a name for the container.
 `-v /root/.cache/huggingface:/root/.cache/huggingface`: Hugging Face cache directory (optional if used with `HUGGING_FACE_HUB_TOKEN`).
@@ -89,6 +92,8 @@ vllm/vllm-openai:v0.9.1 \
 `-enable-auto-tool-choice`: Enables automatic function calling.
 `--gpu-memory-utilization 0.90`: Limits max GPU used by vLLM (may vary depending on the machine resources available).
 `--tensor-parallel-size 2`: This value should match the number of available GPUs (in this case, 2). This is critical for performance on multi-GPU systems.
+```
+
 =====
 
 3. Verify the container's status by running the `docker ps -a` command. The output should show the value you specified for the `--name` parameter.
````
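For context, the flags explained in this diff all belong to a single `docker run` invocation. A sketch reassembling them follows; the model id, published port, and container name are illustrative assumptions and not part of the commit, and the doc's `-enable-auto-tool-choice` is written here in the double-dash form the vLLM CLI is assumed to accept:

```shell
# Sketch only: reassembles the command the diff documents.
# MODEL, the -p port mapping, and the container name are assumptions.
MODEL="meta-llama/Llama-3.1-8B-Instruct"  # hypothetical model id

CMD="docker run --gpus all \
  --name vllm \
  -v /root/.cache/huggingface:/root/.cache/huggingface \
  -p 8000:8000 \
  vllm/vllm-openai:v0.9.1 \
  --model ${MODEL} \
  --enable-auto-tool-choice \
  --gpu-memory-utilization 0.90 \
  --tensor-parallel-size 2"

# Print the assembled command for inspection before running it.
echo "${CMD}"
```

On a multi-GPU host, `--tensor-parallel-size` should be set to the GPU count exposed by `--gpus`, per the explanation above.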

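Step 3's `docker ps -a` check can also be scripted. This is a sketch under assumptions: `check_container` is a hypothetical helper, and `vllm` stands in for whatever value was passed to `--name`:

```shell
# Sketch: look for the --name value in captured `docker ps -a` output.
# check_container is a hypothetical helper, not part of the docs.
check_container() {
  ps_output="$1"   # captured output of `docker ps -a`
  name="$2"        # value given to --name at docker run time
  if echo "${ps_output}" | grep -qw "${name}"; then
    echo "entry found for ${name}"
  else
    echo "no entry for ${name}"
  fi
}

# In practice: check_container "$(docker ps -a)" vllm
```

Note that `docker ps -a` also lists stopped containers, so a matching entry confirms the container exists but not that it is running.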