diff --git a/README.md b/README.md
index 468fb87..f466bc7 100644
--- a/README.md
+++ b/README.md
@@ -44,7 +44,7 @@ Execute the `./IEEE-CIS-Fraud-Detection.ipynb` notebook with the `train.sh` scri
 ```
 This will create 3 variations of the model, you can view and manage those models via the
-`bentoml models` CLI commnad:
+`bentoml models` CLI command:
 
 ```bash
 $ bentoml models list
@@ -215,7 +215,7 @@ docker run --gpus all --device /dev/nvidia0 \
 BentoML makes it efficient to create ML service with multiple ML models, which is often used
 for combining multiple fraud detection models and getting an aggregated result. With BentoML,
 users can choose to run models sequentially or in parallel using the Python AsyncIO APIs
 along with Runners APIs. This makes
-it possible create inference graphes or multi-stage inference pipeline all from Python APIs.
+it possible to create inference graphs or multi-stage inference pipelines all from Python APIs.
 An example can be found under `inference_graph_demo` that runs all three models simutaneously
 and aggregate their results:
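
The second hunk describes fanning out one request to several models concurrently and aggregating their scores. The concurrency pattern it refers to can be sketched with plain AsyncIO; the scorer functions and the max-score threshold rule below are stand-ins for illustration (in the real service each call would be an awaited BentoML runner invocation), not the demo's actual code.

```python
import asyncio

# Hypothetical stand-in scorers; in the BentoML service each would be a
# runner call instead, executed by the framework's worker processes.
async def model_a(tx):
    await asyncio.sleep(0)  # yield control, simulating non-blocking inference
    return 0.2

async def model_b(tx):
    await asyncio.sleep(0)
    return 0.6

async def model_c(tx):
    await asyncio.sleep(0)
    return 0.4

async def is_fraud(tx):
    # Fan out to all three models concurrently, then aggregate:
    # here, flag the transaction if any score crosses an assumed 0.5 threshold.
    scores = await asyncio.gather(model_a(tx), model_b(tx), model_c(tx))
    return max(scores) > 0.5

print(asyncio.run(is_fraud({"amount": 120.0})))  # prints True (model_b scores 0.6)
```

Because the three awaits are gathered rather than run sequentially, total latency is bounded by the slowest model instead of the sum of all three.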