Commit 91306f6

updated the readme
1 parent b6c24bf commit 91306f6

File tree

1 file changed: 2 additions, 0 deletions


reports/README.md

Lines changed: 2 additions & 0 deletions
@@ -579,6 +579,8 @@ We also used mkdocs for setting up documentation of our project. The documentati
 >
 > Answer:

+![this figure](figures/our_overview.png)
+
The starting point of the diagram is our local setup, where we integrated the model using PyTorch Lightning, configuration files with Hydra, and scripts for training, evaluation, and data processing. We set up DVC, with which we pull the raw data zip file from the cloud, process it, and push the processed data back into the pipeline. For training itself we do not use DVC push/pull; instead we mount the bucket, as explained in Module 21. When we commit the code and push it to GitHub, it automatically triggers workflows for unit tests, data statistics tests, code formatting checks, documentation generation, and the building of a training image. The diagram shows that the training image is saved in Google Cloud. We use Hydra to set hyperparameters and Weights & Biases to track and save the model during training. For the training process, we use Compute Engine in Google Cloud, where we initialize Vertex AI using a Dockerfile. The Dockerfile retrieves the API key for Wandb from Google Cloud's Secret Manager. During training, data is loaded from a Google Cloud bucket, and predictions are saved back to the bucket. Furthermore, a user-friendly frontend makes predictions more accessible: users upload an image, which is sent to the model via a backend API, and the returned predictions are displayed to the user. These APIs are deployed by the developer using Cloud Run.
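The push-triggered workflows described above could be wired up in GitHub Actions roughly as in the sketch below. This is an illustrative assumption, not the project's actual workflow file: the job names, test paths, requirements file, and the Artifact Registry image tag are all placeholders.

```yaml
# Sketch of a CI workflow like the one described; names and paths are assumptions.
name: ci
on: [push]
jobs:
  tests:                      # unit tests and data statistics tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest tests/
  format:                     # code formatting checks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pipx run ruff check .
  build-train-image:          # build the training image and store it in Google Cloud
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - run: gcloud builds submit --tag europe-west1-docker.pkg.dev/PROJECT_ID/train/train:latest .
```

A documentation-build job (mkdocs) would follow the same pattern as the `tests` job, ending in `mkdocs build` or a deploy step.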

### Question 30
