Commit 6f0afff

Update to latest version (#32)
1 parent 65821ed commit 6f0afff

File tree

1 file changed: +6 -6 lines changed

README.md

Lines changed: 6 additions & 6 deletions
````diff
@@ -1,5 +1,5 @@
 <!--
-# Copyright 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# Copyright 2023-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 #
 # Redistribution and use in source and binary forms, with or without
 # modification, are permitted provided that the following conditions
@@ -30,8 +30,8 @@
 
 **LATEST RELEASE: You are currently on the main branch which tracks
 under-development progress towards the next release. The current release branch
-is [r23.12](https://github.com/triton-inference-server/vllm_backend/tree/r23.12)
-and which corresponds to the 23.12 container release on
+is [r24.01](https://github.com/triton-inference-server/vllm_backend/tree/r24.01)
+and which corresponds to the 24.01 container release on
 [NVIDIA GPU Cloud (NGC)](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver).**
 
 # vLLM Backend
@@ -96,9 +96,9 @@ A sample command to build a Triton Server container with all options enabled is
 --endpoint=grpc
 --endpoint=sagemaker
 --endpoint=vertex-ai
---upstream-container-version=23.12
---backend=python:r23.12
---backend=vllm:r23.12
+--upstream-container-version=24.01
+--backend=python:r24.01
+--backend=vllm:r24.01
 ```
 
 ### Option 3. Add the vLLM Backend to the Default Triton Container
````
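For reference, a minimal sketch of the sample build command as it reads after this bump, assuming Triton's `build.py` from the [server repo](https://github.com/triton-inference-server/server); only the flags visible in this diff are reproduced, and the README's full command enables additional options:

```bash
# Pin the Python and vLLM backends to the r24.01 release branches and build
# against the 24.01 upstream container (flags outside this diff are omitted).
./build.py \
  --endpoint=grpc \
  --endpoint=sagemaker \
  --endpoint=vertex-ai \
  --upstream-container-version=24.01 \
  --backend=python:r24.01 \
  --backend=vllm:r24.01
```

Alternatively, assuming NGC keeps its usual `<xx.yy>-vllm-python-py3` tag scheme, the prebuilt container matching the new r24.01 release branch can be pulled directly:

```bash
docker pull nvcr.io/nvidia/tritonserver:24.01-vllm-python-py3
```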
