content/blog/production-ready-object-detection-model-training-workflow-with-hpe-machine-learning-development-environment.md

6 additions & 6 deletions
```diff
@@ -83,10 +83,10 @@ This notebook walks you through each step required to train a model using co
 Here, I will show you how to:
 
-* Download the Xview Dataset
+* Download the xView Dataset
 * How to convert labels to coco format
 * How to conduct the preprocessing step, **Tiling**: slicing large satellite imagery into chunks
-* How to upload to s3 bucket to support distributed training
+* How to upload to S3 bucket to support distributed training
 
 Let's get started!
```
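The **Tiling** step mentioned above slices each large satellite image into fixed-size chunks before training. The blog's actual implementation is not shown in this diff; the following is a minimal sketch of how the crop boxes for such tiles could be computed, with `tile_size` and `overlap` as purely illustrative values.

```python
def tile_boxes(width, height, tile_size=512, overlap=64):
    """Return (left, top, right, bottom) crop boxes covering an image.

    Consecutive tiles overlap by `overlap` pixels so objects on tile
    borders are fully contained in at least one tile.
    """
    step = tile_size - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            right = min(left + tile_size, width)
            bottom = min(top + tile_size, height)
            boxes.append((left, top, right, bottom))
    return boxes

# A 1024x1024 image with these settings yields a 3x3 grid of tiles.
boxes = tile_boxes(1024, 1024)
```

Each box can then be passed to an image library's crop routine to write out the tile; edge tiles are clipped to the image bounds rather than padded.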
```diff
@@ -499,7 +499,7 @@ Let's get started!
 ## Pre-req: Run startup-hook.sh
 
-This script will install some python dependencies, and install dataset labels needed when loading the Xview dataset:
+This script will install some python dependencies, and install dataset labels needed when loading the xView dataset:
 
-*Note that completing this tutorial requires you to upload your dataset from Step 2 into a publicly accessible S3 bucket. This will enable for a large scale distributed experiment to have access to the dataset without installing the dataset on device. View [Determined Documentation](<* https://docs.determined.ai/latest/training/load-model-data.html#streaming-from-object-storage>) and [AWS instructions](<* https://codingsight.com/upload-files-to-aws-s3-with-the-aws-cli/>) to learn how to upload your dataset to an S3 bucket. Review the*`S3Backend` class in `data.py`
+*Note that completing this tutorial requires you to upload your dataset from Step 2 into a publicly accessible S3 bucket. This will enable for a large scale distributed experiment to have access to the dataset without installing the dataset on device. View [Determined Documentation](https://docs.determined.ai/latest/model-dev-guide/load-model-data.html) and [AWS instructions](https://codingsight.com/upload-files-to-aws-s3-with-the-aws-cli/) to learn how to upload your dataset to an S3 bucket. Review the*`S3Backend` class in `data.py`
 
 When you define your S3 bucket and uploaded your dataset, make sure to change the `TARIN_DATA_DIR` in `build_training_data_loader` with the defined path in the S3 bucket.
```
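The note above points to an `S3Backend` class in `data.py` that streams training data from the bucket instead of installing it on device. That class is not shown in this diff, so the sketch below is only a hypothetical illustration of the pattern: a thin wrapper over an S3 client's `get_object` call, with the client injected so the example runs without AWS credentials or network access.

```python
import io


class S3Backend:
    """Hypothetical sketch of a streaming S3 data backend.

    The real class in data.py may have a different interface; in real
    use the injected client would be boto3.client("s3").
    """

    def __init__(self, bucket, client):
        self.bucket = bucket
        self.client = client

    def read(self, key):
        # get_object returns a dict whose "Body" is a streaming file-like
        # object; .read() pulls the full object into memory.
        obj = self.client.get_object(Bucket=self.bucket, Key=key)
        return obj["Body"].read()


class _FakeS3Client:
    """Stand-in for boto3's S3 client, so the sketch runs offline."""

    def get_object(self, Bucket, Key):
        return {"Body": io.BytesIO(b"fake image bytes")}


backend = S3Backend("my-xview-bucket", _FakeS3Client())
data = backend.read("train_images/tile_0.png")
```

In a real training job, `build_training_data_loader` would construct this backend with the bucket and prefix you uploaded in Step 2.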
````diff
@@ -907,7 +907,7 @@ After command finishes, run the command to move the file to our prepared `model-
 This is the directory structure needed to prepare your custom PyTorch model for KServe inferencing:
 
-```
+```markdown
 ├── config
 │   └── config.properties
 ├── model-store
@@ -917,7 +917,7 @@ This is the directory structure needed to prepare your custom PyTorch model for
````
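The directory layout in the hunk above (a `config/` directory holding `config.properties`, plus a `model-store/` directory for the archived model) can be scripted. This is a small illustrative helper, not part of the blog's code; the function name and use of a temporary directory are assumptions for the demo.

```python
import tempfile
from pathlib import Path


def make_kserve_layout(root):
    """Create the config/ and model-store/ skeleton used for KServe serving.

    The .mar file produced by torch-model-archiver would later be moved
    into model-store/.
    """
    root = Path(root)
    (root / "config").mkdir(parents=True, exist_ok=True)
    (root / "model-store").mkdir(parents=True, exist_ok=True)
    (root / "config" / "config.properties").touch()
    # Return the created relative paths for inspection.
    return sorted(p.relative_to(root).as_posix() for p in root.rglob("*"))


layout = make_kserve_layout(tempfile.mkdtemp())
```

Running this yields the same tree shown in the diff, ready to receive the model archive and the TorchServe `config.properties` contents.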