Serving a Machine Learning Model with FastAPI and Streamlit
Download the pre-trained fast style transfer models with the helper script:
sh download_models.sh
$ docker build -t backend .
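The backend image might be built from a Dockerfile along these lines (a minimal sketch; the base image and the `main:app` module path are assumptions — only the port 8080 and uvicorn come from the logs below):

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8080

# Serve the FastAPI app with uvicorn on the port mapped below
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
```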
$ docker run -p 8080:8080 backend
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
In the browser, navigate to http://localhost:8080/. You should see:
{
"message": "Welcome from the API"
}
Press CTRL+C to stop the container once you're done.
To test the complete application, build the images and start both containers from the project root, then navigate to http://localhost:8501:
$ docker-compose up -d --build
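A docker-compose.yml for the two services might look like this sketch (the `backend/` and `frontend/` build directories are assumptions; the ports come from the steps above):

```yaml
services:
  backend:
    build: ./backend       # FastAPI service
    ports:
      - "8080:8080"
  frontend:
    build: ./frontend      # Streamlit service
    ports:
      - "8501:8501"
    depends_on:
      - backend
```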