feat: Add vLLM counter metrics access through Triton #53
Merged
Commits (27):
0686a7c  Add first supported metrics (yinggeh)
21e2356  Update comments (yinggeh)
d95bb2c  Minor update (yinggeh)
321faa0  Add metrics test (yinggeh)
468539f  Fix copyright (yinggeh)
8eba2f0  Remove unused metrics and update comments (yinggeh)
6f97f6f  Minor update (yinggeh)
bf7669e  Minor updates (yinggeh)
e9d0dbb  Minor fix (yinggeh)
7d0dc5b  Remove unused module (yinggeh)
979dc02  Fix "metrics not supported error" when building with TRITON_ENABLE_ME… (yinggeh)
3dd04c5  Fix "metrics not supported error" when building with TRITON_ENABLE_ME… (yinggeh)
07f2575  Simply test (yinggeh)
2135145  Completely turn off metrics (yinggeh)
56aea05  Add vLLM disable_log_stats config test (yinggeh)
0dadc8e  Test metrics are enabled by default if disable_log_stats is not set. (yinggeh)
8d8fd2a  Update tests based on comments (yinggeh)
4f2e217  Remove _log_gauge (yinggeh)
d22fd03  Resolve comments (yinggeh)
c8bdb6e  Merge branch 'main' of github.com:triton-inference-server/vllm_backen… (yinggeh)
8280d26  Update (yinggeh)
6fa7ae3  Change temp directory (yinggeh)
89ca6f4  Disable metrics report by default. Controlled by parameter "REPORT_ME… (yinggeh)
1158fee  Test server option set --allow-metrics=false (yinggeh)
a99d38b  Add docs (yinggeh)
de8f25b  Minor update (yinggeh)
b1333ce  Both args checking (yinggeh)
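
This PR exposes vLLM's counter stats through Triton's Prometheus metrics endpoint. As a quick orientation for reviewers: once a model with metrics enabled is running, the counters should be visible with a plain scrape of that endpoint. A minimal sketch, assuming Triton's default metrics port (8002) and a `vllm:` prefix on the exported counter names (the port and prefix are assumptions, not taken from this diff):

```python
# Sketch: scrape Triton's Prometheus metrics endpoint and pick out the
# vLLM counters. Assumes the default metrics port (8002) and a "vllm:"
# name prefix -- both are assumptions, not confirmed by this diff.
import urllib.request

def get_vllm_counters(url="http://localhost:8002/metrics"):
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8")
    counters = {}
    for line in text.splitlines():
        # Skip "# HELP" / "# TYPE" comment lines and non-vLLM metrics.
        if line.startswith("vllm:"):
            name, _, value = line.rpartition(" ")
            counters[name] = float(value)
    return counters

if __name__ == "__main__":
    for name, value in sorted(get_vllm_counters().items()):
        print(f"{name} = {value}")
```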
@@ -0,0 +1,248 @@
#!/bin/bash
# Copyright 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

source ../../common/util.sh

TRITON_DIR=${TRITON_DIR:="/opt/tritonserver"}
SERVER=${TRITON_DIR}/bin/tritonserver
BACKEND_DIR=${TRITON_DIR}/backends
SERVER_ARGS="--model-repository=$(pwd)/models --backend-directory=${BACKEND_DIR} --model-control-mode=explicit --load-model=vllm_opt --log-verbose=1"
SERVER_LOG="./vllm_metrics_server.log"
CLIENT_LOG="./vllm_metrics_client.log"
TEST_RESULT_FILE='test_results.txt'
CLIENT_PY="./vllm_metrics_test.py"
SAMPLE_MODELS_REPO="../../../samples/model_repository"
EXPECTED_NUM_TESTS=1

# Helpers =======================================
function copy_model_repository {
    rm -rf models && mkdir -p models
    cp -r ${SAMPLE_MODELS_REPO}/vllm_model models/vllm_opt
    # The `vllm_opt` model is loaded on server start and stays loaded throughout
    # unit testing. To ensure that vLLM's memory profiler does not error out when
    # `vllm_load_test` is loaded, we reduce "gpu_memory_utilization" for `vllm_opt`
    # so that at least 60% of GPU memory is available for other models.
    sed -i 's/"gpu_memory_utilization": 0.5/"gpu_memory_utilization": 0.4/' models/vllm_opt/1/model.json
}

RET=0

# Test disabling vLLM metrics reporting without parameter "REPORT_CUSTOM_METRICS" in config.pbtxt
copy_model_repository
run_server
if [ "$SERVER_PID" == "0" ]; then
    cat $SERVER_LOG
    echo -e "\n***\n*** Failed to start $SERVER\n***"
    exit 1
fi

set +e
python3 $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_disabled -v > $CLIENT_LOG 2>&1

if [ $? -ne 0 ]; then
    cat $CLIENT_LOG
    echo -e "\n***\n*** Running $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_disabled FAILED. \n***"
    RET=1
else
    check_test_results $TEST_RESULT_FILE $EXPECTED_NUM_TESTS
    if [ $? -ne 0 ]; then
        cat $CLIENT_LOG
        echo -e "\n***\n*** Test Result Verification FAILED.\n***"
        RET=1
    fi
fi
set -e

kill $SERVER_PID
wait $SERVER_PID

# Test disabling vLLM metrics reporting with parameter "REPORT_CUSTOM_METRICS" set to "no" in config.pbtxt
copy_model_repository
echo -e "
parameters: {
key: \"REPORT_CUSTOM_METRICS\"
value: {
string_value:\"no\"
}
}
" >> models/vllm_opt/config.pbtxt

run_server
if [ "$SERVER_PID" == "0" ]; then
    cat $SERVER_LOG
    echo -e "\n***\n*** Failed to start $SERVER\n***"
    exit 1
fi

set +e
python3 $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_disabled -v > $CLIENT_LOG 2>&1

if [ $? -ne 0 ]; then
    cat $CLIENT_LOG
    echo -e "\n***\n*** Running $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_disabled FAILED. \n***"
    RET=1
else
    check_test_results $TEST_RESULT_FILE $EXPECTED_NUM_TESTS
    if [ $? -ne 0 ]; then
        cat $CLIENT_LOG
        echo -e "\n***\n*** Test Result Verification FAILED.\n***"
        RET=1
    fi
fi
set -e

kill $SERVER_PID
wait $SERVER_PID

# Test vLLM metrics reporting with parameter "REPORT_CUSTOM_METRICS" set to "yes" in config.pbtxt
copy_model_repository
cp ${SAMPLE_MODELS_REPO}/vllm_model/config.pbtxt models/vllm_opt
echo -e "
parameters: {
key: \"REPORT_CUSTOM_METRICS\"
value: {
string_value:\"yes\"
}
}
" >> models/vllm_opt/config.pbtxt

run_server
if [ "$SERVER_PID" == "0" ]; then
    cat $SERVER_LOG
    echo -e "\n***\n*** Failed to start $SERVER\n***"
    exit 1
fi

set +e
python3 $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics -v > $CLIENT_LOG 2>&1

if [ $? -ne 0 ]; then
    cat $CLIENT_LOG
    echo -e "\n***\n*** Running $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics FAILED. \n***"
    RET=1
else
    check_test_results $TEST_RESULT_FILE $EXPECTED_NUM_TESTS
    if [ $? -ne 0 ]; then
        cat $CLIENT_LOG
        echo -e "\n***\n*** Test Result Verification FAILED.\n***"
        RET=1
    fi
fi
set -e

kill $SERVER_PID
wait $SERVER_PID

# Test enabling vLLM metrics reporting in config.pbtxt but disabling in model.json
copy_model_repository
jq '. += {"disable_log_stats" : true}' models/vllm_opt/1/model.json > "temp.json"
mv temp.json models/vllm_opt/1/model.json
echo -e "
parameters: {
key: \"REPORT_CUSTOM_METRICS\"
value: {
string_value:\"yes\"
}
}
" >> models/vllm_opt/config.pbtxt

run_server
if [ "$SERVER_PID" == "0" ]; then
    cat $SERVER_LOG
    echo -e "\n***\n*** Failed to start $SERVER\n***"
    exit 1
fi

set +e
python3 $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_disabled -v > $CLIENT_LOG 2>&1

if [ $? -ne 0 ]; then
    cat $CLIENT_LOG
    echo -e "\n***\n*** Running $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_disabled FAILED. \n***"
    RET=1
else
    check_test_results $TEST_RESULT_FILE $EXPECTED_NUM_TESTS
    if [ $? -ne 0 ]; then
        cat $CLIENT_LOG
        echo -e "\n***\n*** Test Result Verification FAILED.\n***"
        RET=1
    fi
fi
set -e

kill $SERVER_PID
wait $SERVER_PID

# Test enabling vLLM metrics reporting in config.pbtxt while disabling in server option
copy_model_repository
echo -e "
parameters: {
key: \"REPORT_CUSTOM_METRICS\"
value: {
string_value:\"yes\"
}
}
" >> models/vllm_opt/config.pbtxt
SERVER_ARGS="${SERVER_ARGS} --allow-metrics=false"
run_server
if [ "$SERVER_PID" == "0" ]; then
    cat $SERVER_LOG
    echo -e "\n***\n*** Failed to start $SERVER\n***"
    exit 1
fi

set +e
python3 $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_refused -v > $CLIENT_LOG 2>&1

if [ $? -ne 0 ]; then
    cat $CLIENT_LOG
    echo -e "\n***\n*** Running $CLIENT_PY VLLMTritonMetricsTest.test_vllm_metrics_refused FAILED. \n***"
    RET=1
else
    check_test_results $TEST_RESULT_FILE $EXPECTED_NUM_TESTS
    if [ $? -ne 0 ]; then
        cat $CLIENT_LOG
        echo -e "\n***\n*** Test Result Verification FAILED.\n***"
        RET=1
    fi
fi
set -e

kill $SERVER_PID
wait $SERVER_PID
rm -rf "./models" "temp.json"

if [ $RET -eq 1 ]; then
    cat $CLIENT_LOG
    cat $SERVER_LOG
    echo -e "\n***\n*** vLLM test FAILED. \n***"
else
    echo -e "\n***\n*** vLLM test PASSED. \n***"
fi

collect_artifacts_from_subdir
exit $RET
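
Taken together, the five scenarios above exercise the same feature switch at three levels: the `--allow-metrics` server flag, the `REPORT_CUSTOM_METRICS` parameter in config.pbtxt, and `disable_log_stats` in model.json; counters are only reported when all three allow it. A condensed sketch of that precedence, where the function and argument names are illustrative and only the three switches come from this PR:

```python
# Condensed view of the precedence the five test scenarios pin down.
# All three switches must permit reporting; names here are illustrative,
# not taken from the backend code.
def should_report_vllm_metrics(
    allow_metrics: bool,         # server flag: --allow-metrics
    report_custom_metrics: str,  # config.pbtxt parameter REPORT_CUSTOM_METRICS
    disable_log_stats: bool,     # model.json setting disable_log_stats
) -> bool:
    return (
        allow_metrics
        and report_custom_metrics == "yes"
        and not disable_log_stats
    )
```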
Review comments:

- nit: Not sure if it should be in caps.
- To be consistent with the only `parameters` example found in our code.
- It's one example; here's lower case as well: https://github.com/triton-inference-server/server/blob/53200091b84f08a5e4921f5073137784570283e9/docs/user_guide/optimization.md#onnx-with-tensorrt-optimization-ort-trt
- I am more inclined to upper case for boolean keys.
- Can use `key_value.upper()` before the comparison.
- I'd rather make all parameters either case-insensitive at the time config.pbtxt is loaded, or all case-sensitive.
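
For what the `.upper()` suggestion would look like in practice, a minimal sketch, assuming the usual Triton Python-backend layout where `args["model_config"]` is a JSON string and each parameter carries a `string_value`; the helper name is hypothetical:

```python
# Sketch of the case-insensitive check suggested above, so "yes", "Yes",
# and "YES" in config.pbtxt all enable reporting. The helper name is
# hypothetical; REPORT_CUSTOM_METRICS is the parameter added in this PR.
import json

def metrics_enabled(args: dict) -> bool:
    model_config = json.loads(args["model_config"])
    params = model_config.get("parameters", {})
    value = params.get("REPORT_CUSTOM_METRICS", {}).get("string_value", "no")
    return value.upper() == "YES"
```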