XGBoost Demo with BentoML

This is a BentoML example project demonstrating how to serve and deploy an XGBoost model with BentoML. It provides various production-ready features, including batching, monitoring, data validation, and a web interface.

See here for a full list of BentoML example projects.

Install dependencies

Clone the repository.

git clone https://github.com/bentoml/XGBoostDemo.git
cd XGBoostDemo

Install BentoML and the required dependencies for this project.

pip install bentoml xgboost scikit-learn gradio

While this example uses scikit-learn's iris dataset for demo purposes, BentoML supports a wide variety of ML frameworks beyond XGBoost, such as PyTorch and TensorFlow.

Train and save a model

Save the XGBoost model to the BentoML Model Store:

python train.py

Verify that the model has been successfully saved:

$ bentoml models list

Tag                                Module                   Size        Creation Time
iris_bst:2a2vn3d4twd4rf6f          bentoml.xgboost          5.94 KiB    2025-08-19 01:42:49

Run BentoML Service

In this project, we provide multiple BentoML Services for the XGBoost model, each targeting a different use case:

  • service.py: A basic BentoML Service that serves the XGBoost model
  • service_batching.py: Enables adaptive batching to efficiently handle concurrent requests
  • service_io_validation.py: Enforces input data validation
  • service_monitoring.py: Implements logging for predictions
  • gradio/service.py: Integrates Gradio to use a web interface for model interaction
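As an illustration of the input validation idea, here is a framework-agnostic sketch of the kind of check service_io_validation.py enforces on a /predict payload (four numeric features per row). The function name and error messages are illustrative, not taken from the repo:

```python
def validate_input(input_data):
    """Check that input_data is a non-empty list of rows,
    each holding exactly 4 numeric iris features."""
    if not isinstance(input_data, list) or not input_data:
        raise ValueError("input_data must be a non-empty list of rows")
    for i, row in enumerate(input_data):
        if not isinstance(row, list) or len(row) != 4:
            raise ValueError(f"row {i} must have exactly 4 features")
        if not all(isinstance(v, (int, float)) for v in row):
            raise ValueError(f"row {i} must contain only numbers")
    return input_data

# A well-formed payload passes through unchanged.
validate_input([[0.1, 0.4, 0.2, 1]])
```

In the actual Service, BentoML can express such constraints declaratively through pydantic-style type annotations on the API signature, so malformed requests are rejected before reaching the model.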

For larger teams collaborating on multiple models and projects, the following examples show how to standardize ML service development:

  • multiple_models/service.py: Serves multiple models in a single Service with A/B testing endpoints (/v1/predict, /v2/predict)
  • standardization/: Uses the shared components in common.py and enforces environment dependencies and API specifications across multiple projects
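The A/B endpoints in multiple_models/service.py expose two model versions side by side. If you instead want to split traffic across versions behind a single endpoint, a deterministic hash-based router is a common pattern; this sketch is not taken from the repo:

```python
import hashlib

def choose_variant(request_id: str, v2_fraction: float = 0.1) -> str:
    """Deterministically route a request to "v1" or "v2".

    Hashing the request (or user) ID gives a stable assignment:
    the same ID always lands on the same variant.
    """
    digest = hashlib.sha256(request_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "v2" if bucket < v2_fraction else "v1"
```

With the default `v2_fraction`, roughly 10% of IDs route to v2, and a given ID always gets the same answer, which keeps experiment assignments consistent across requests.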

For more information about these examples, see the blog post Building ML Pipelines with MLflow and BentoML.

Let's try the Service that integrates Gradio. Run bentoml serve to start it.

cd gradio
bentoml serve service.py:IrisClassifier

The server is now active at http://localhost:3000. You can interact with it using the Gradio UI at /ui.

[Screenshot: the Gradio web UI]

Alternatively, call the inference endpoint with curl or the BentoML Python client.

curl
curl -X 'POST' \
  'http://localhost:3000/predict' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "input_data": [
    [
      0.1,
      0.4,
      0.2,
      1
    ]
  ]
}'
Python client
import bentoml

with bentoml.SyncHTTPClient("http://localhost:3000") as client:
    res = client.predict([[6.5, 3.0, 5.8, 2.2]])[0]
    print("response:", res)

Try this script for load testing:

python benchmark_client.py
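The implementation of benchmark_client.py isn't shown here, but the core of such a load test is usually a thread pool firing concurrent requests and collecting latencies. Below is a minimal sketch with a pluggable request function; `fake_request` is a stand-in you would replace with a real `client.predict` call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load_test(send_request, num_requests=100, concurrency=10):
    """Fire num_requests calls across a thread pool and
    return each request's latency in seconds."""
    def timed_call(_):
        start = time.perf_counter()
        send_request()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(timed_call, range(num_requests)))

# Stand-in for a real call such as:
#   client.predict([[6.5, 3.0, 5.8, 2.2]])
def fake_request():
    time.sleep(0.001)

latencies = run_load_test(fake_request, num_requests=50, concurrency=5)
print(f"p50 latency: {sorted(latencies)[len(latencies) // 2] * 1000:.2f} ms")
```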

Deploy to BentoCloud

After the Service is ready, you can deploy the application to BentoCloud for better management and scalability. Sign up for a BentoCloud account if you don't have one.

Make sure you have logged in to BentoCloud.

bentoml cloud login

Deploy your target Service from the project directory.

# Replace service.py with the Service you want to deploy, e.g. service_batching.py
bentoml deploy service.py:IrisClassifier

Once the application is up and running, you can access it via the exposed URL.

Note: For custom deployment in your own infrastructure, use BentoML to generate an OCI-compliant image.
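As a sketch of that workflow (the Bento tag `iris_classifier` is illustrative, and `bentoml build` assumes the project includes a bentofile.yaml):

```shell
# Package the Service and its dependencies into a Bento.
bentoml build

# Generate an OCI-compliant container image from the built Bento.
bentoml containerize iris_classifier:latest
```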
