This repository was archived by the owner on Aug 4, 2025. It is now read-only.
README.md (4 changes: 2 additions & 2 deletions)
@@ -44,7 +44,7 @@ Execute the `./IEEE-CIS-Fraud-Detection.ipynb` notebook with the `train.sh` script
 ```
 
 This will create 3 variations of the model, you can view and manage those models via the
-`bentoml models` CLI commnad:
+`bentoml models` CLI command:
 
 ```bash
 $ bentoml models list
@@ -215,7 +215,7 @@ docker run --gpus all --device /dev/nvidia0 \
 BentoML makes it efficient to create ML service with multiple ML models, which is often used for combining
 multiple fraud detection models and getting an aggregated result. With BentoML, users can choose to run
 models sequentially or in parallel using the Python AsyncIO APIs along with Runners APIs. This makes
-it possible create inference graphes or multi-stage inference pipeline all from Python APIs.
+it possible create inference graphs or multi-stage inference pipeline all from Python APIs.
 
 An example can be found under `inference_graph_demo` that runs all three models simutaneously and
 aggregate their results:
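The fan-out-and-aggregate pattern described above can be sketched with plain AsyncIO. Everything below (the scorer functions, scores, feature dict, and 0.5 threshold) is a hypothetical stand-in for illustration, not the actual `inference_graph_demo` code, which dispatches to BentoML Runners rather than local coroutines:

```python
import asyncio

# Hypothetical stand-ins for the three fraud-detection models; the real demo
# would call into BentoML Runners here instead of local coroutines.
async def score_model_a(features: dict) -> float:
    await asyncio.sleep(0)  # yield control, as a real inference call would
    return 0.12

async def score_model_b(features: dict) -> float:
    await asyncio.sleep(0)
    return 0.87

async def score_model_c(features: dict) -> float:
    await asyncio.sleep(0)
    return 0.45

async def predict(features: dict) -> dict:
    # asyncio.gather schedules all three scorers concurrently and
    # collects their results in order.
    scores = await asyncio.gather(
        score_model_a(features),
        score_model_b(features),
        score_model_c(features),
    )
    # Example aggregation: flag fraud if any model exceeds a threshold.
    return {"scores": list(scores), "is_fraud": max(scores) > 0.5}

result = asyncio.run(predict({"TransactionAmt": 42.0}))
print(result)
```

Because `gather` awaits all scorers at once, the total latency is roughly that of the slowest model rather than the sum of all three.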