- Build a serverless ML application in AWS that utilises a CI/CD pipeline.
- CircleCI - Test and build.
- Deploy a custom scikit-learn model inside a Docker container.
- Set up and manage a notebook environment using SageMaker and Docker.
- Generate and label data.
- Run Terraform.
- Fetch the model artefact.
- Load the training data.
- Set up and manage inference clusters.
- Provision a public endpoint with API Gateway.
- Should the API be broken into two separate endpoints, one for training and one for inference, or should they be combined?
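For the CircleCI test-and-build step, a minimal config could look like the sketch below. The Docker image, `requirements.txt`, test command, and image tag are all assumptions, not project specifics.

```yaml
# Hypothetical minimal CircleCI pipeline: run tests, then build the image.
version: 2.1

jobs:
  test-and-build:
    docker:
      - image: cimg/python:3.11  # assumed Python version
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: pip install -r requirements.txt
      - run:
          name: Run tests
          command: pytest
      - setup_remote_docker  # needed to run docker build inside CircleCI
      - run:
          name: Build inference image
          command: docker build -t ml-app .

workflows:
  build:
    jobs:
      - test-and-build
```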
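For the Terraform and API Gateway bullets, provisioning a public HTTP API in front of a Lambda could look like this sketch. Resource names and the `aws_lambda_function.handler` reference are hypothetical placeholders, assuming the Lambda is defined elsewhere in the configuration.

```hcl
# Hypothetical Terraform sketch: public HTTP API routed to a Lambda.
resource "aws_apigatewayv2_api" "ml_api" {
  name          = "ml-api"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_integration" "lambda" {
  api_id                 = aws_apigatewayv2_api.ml_api.id
  integration_type       = "AWS_PROXY"
  integration_uri        = aws_lambda_function.handler.invoke_arn
  payload_format_version = "2.0"
}

resource "aws_apigatewayv2_route" "infer" {
  api_id    = aws_apigatewayv2_api.ml_api.id
  route_key = "POST /infer"
  target    = "integrations/${aws_apigatewayv2_integration.lambda.id}"
}

resource "aws_apigatewayv2_stage" "default" {
  api_id      = aws_apigatewayv2_api.ml_api.id
  name        = "$default"
  auto_deploy = true
}
```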
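The "generate and label data" step could be prototyped with a synthetic, pre-labelled dataset before real data sources are wired in. This sketch uses scikit-learn's `make_classification`; the feature and label column names are placeholders.

```python
# Generate a synthetic labelled dataset as a stand-in for real project data.
import pandas as pd
from sklearn.datasets import make_classification


def generate_labelled_data(n_samples: int = 1000,
                           n_features: int = 10,
                           seed: int = 42) -> pd.DataFrame:
    """Return a DataFrame of synthetic features plus a binary 'label' column."""
    X, y = make_classification(
        n_samples=n_samples,
        n_features=n_features,
        random_state=seed,
    )
    df = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(n_features)])
    df["label"] = y
    return df


df = generate_labelled_data()
```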
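Training the custom scikit-learn model and producing the model artefact that a later stage fetches could be sketched as below. On AWS the serialised file would typically be packaged and uploaded to S3 (e.g. as `model.tar.gz` for SageMaker); that upload is omitted here, and the model choice is an assumption.

```python
# Train a simple scikit-learn model and serialise it as the "model artefact".
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data stands in for the real training set.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

# Serialise the fitted model; this file is what the inference container
# would later fetch and load.
joblib.dump(model, "model.joblib")
restored = joblib.load("model.joblib")
```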
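On the open question of one API versus two, a middle ground is a single Lambda behind API Gateway that routes `/train` and `/infer` to separate functions, which keeps deployment simple while leaving the split into two Lambdas easy later. The sketch below assumes API Gateway's proxy-integration event shape (`path`, `body`); the training and inference internals are placeholders.

```python
# Hypothetical single Lambda handler routing training and inference requests.
import json


def _train(payload: dict) -> dict:
    # Placeholder: kick off a training job (e.g. via SageMaker) here.
    return {"status": "training_started", "job": payload.get("job_name", "default")}


def _infer(payload: dict) -> dict:
    # Placeholder: load the model artefact and run prediction here.
    return {"status": "ok", "prediction": 0}


def handler(event: dict, context=None) -> dict:
    """Dispatch API Gateway proxy events by path."""
    payload = json.loads(event.get("body") or "{}")
    route = event.get("path", "")
    if route == "/train":
        body = _train(payload)
    elif route == "/infer":
        body = _infer(payload)
    else:
        return {"statusCode": 404, "body": json.dumps({"error": "unknown route"})}
    return {"statusCode": 200, "body": json.dumps(body)}
```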