This is a BentoML example project, which demonstrates how to serve and deploy an XGBoost model with BentoML. It provides various production-ready features including batching, monitoring, data validation, and a web interface.
See here for a full list of BentoML example projects.
Clone the repository.
```bash
git clone https://github.com/bentoml/XGBoostDemo.git
cd XGBoostDemo
```

Install BentoML and the required dependencies for this project.

```bash
pip install bentoml xgboost scikit-learn gradio
```

While the example uses scikit-learn for demo purposes, both XGBoost and BentoML support a wide variety of frameworks, such as PyTorch and TensorFlow.
Save the XGBoost model to the BentoML Model Store:
```bash
python train.py
```

Verify that the model has been successfully saved:
```bash
$ bentoml models list
Tag                        Module           Size      Creation Time
iris_bst:2a2vn3d4twd4rf6f  bentoml.xgboost  5.94 KiB  2025-08-19 01:42:49
```

In this project, we provide multiple BentoML Services for the XGBoost model for different use cases:
- `service.py`: A basic BentoML Service that serves the XGBoost model
- `service_batching.py`: Enables adaptive batching to efficiently handle concurrent requests
- `service_io_validation.py`: Enforces input data validation
- `service_monitoring.py`: Implements logging for predictions
- `gradio/service.py`: Integrates Gradio to provide a web interface for model interaction
For larger teams collaborating on multiple models and projects, you can use the following examples to standardize ML service development.
- `multiple_models/service.py`: Serves multiple models in a single Service with A/B testing endpoints (`/v1/predict`, `/v2/predict`)
- `standardization/`: Uses the shared components in `common.py` and enforces environment dependencies and API specifications across multiple projects
For more information about them, see the blog post Building ML Pipelines with MLflow and BentoML.
Let's try the Service that integrates Gradio. Run `bentoml serve` to start it.

```bash
cd gradio
bentoml serve service.py:IrisClassifier
```

The server is now active at http://localhost:3000. You can interact with it using the Gradio UI at `/ui`.
Alternatively, call the inference endpoint with cURL or the BentoML Python client.

cURL:

```bash
curl -X 'POST' \
  'http://localhost:3000/predict' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "input_data": [
      [0.1, 0.4, 0.2, 1]
    ]
  }'
```

Python client:
```python
import bentoml

with bentoml.SyncHTTPClient("http://localhost:3000") as client:
    res = client.predict([[6.5, 3.0, 5.8, 2.2]])[0]
    print("response:", res)
```

Try this script for load testing:
```bash
python benchmark_client.py
```

After the Service is ready, you can deploy the application to BentoCloud for better management and scalability. Sign up for a BentoCloud account if you don't have one.
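For reference, the kind of concurrent load test a script like `benchmark_client.py` performs can be sketched with the standard library alone. The endpoint and payload shape below are assumptions taken from the cURL example above; the repository's actual script may differ.

```python
"""A stdlib-only sketch of a concurrent load-testing client."""
import json
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def benchmark(url: str, payload: dict, total: int = 100, concurrency: int = 10) -> list[float]:
    """Send `total` POST requests with `concurrency` workers; return sorted latencies in seconds."""
    body = json.dumps(payload).encode()

    def one_request(_: int) -> float:
        req = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json"}
        )
        start = time.perf_counter()
        with urllib.request.urlopen(req) as resp:
            resp.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return sorted(pool.map(one_request, range(total)))


# Example (requires a running server):
# latencies = benchmark("http://localhost:3000/predict",
#                       {"input_data": [[0.1, 0.4, 0.2, 1]]}, total=50)
# print(f"p50={latencies[len(latencies) // 2]:.4f}s  max={latencies[-1]:.4f}s")
```

Sorting the latencies makes it easy to read off percentiles, which matter more than the mean when evaluating features like adaptive batching.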
Make sure you have logged in to BentoCloud.

```bash
bentoml cloud login
```

Deploy your target Service from the project directory.
```bash
# Replace it with your desired Service, like service_batching.py
bentoml deploy service.py:IrisClassifier
```

Once the application is up and running, you can access it via the exposed URL.
Note: For custom deployment in your own infrastructure, use BentoML to generate an OCI-compliant image.
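Building such an image typically starts from a `bentofile.yaml` build file. A minimal sketch, with the package list assumed from the install step above:

```yaml
service: "service:IrisClassifier"
include:
  - "*.py"
python:
  packages:
    - xgboost
    - scikit-learn
```

With the build file in place, `bentoml build` packages the project into a Bento, and `bentoml containerize` turns it into an OCI-compliant image you can run on your own infrastructure.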
