
Commit d3fcbb0

committed
Update Blog “production-ready-object-detection-model-training-workflow-with-hpe-machine-learning-development-environment”
1 parent 50aa1e9 commit d3fcbb0

File tree

1 file changed: +6 -6 lines changed


content/blog/production-ready-object-detection-model-training-workflow-with-hpe-machine-learning-development-environment.md

Lines changed: 6 additions & 6 deletions
@@ -83,10 +83,10 @@ This notebook walks you through each step required to train a model using co
 
 Here, I will show you how to:
 
-* Download the Xview Dataset
+* Download the xView Dataset
 * How to convert labels to coco format
 * How to conduct the preprocessing step, **Tiling**: slicing large satellite imagery into chunks
-* How to upload to s3 bucket to support distributed training
+* How to upload to S3 bucket to support distributed training
 
 Let's get started!
 
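The **Tiling** step called out in this hunk is the one non-obvious piece of preprocessing: xView scenes are far too large to feed to a detector directly, so they are sliced into small chips first. Purely as a rough sketch of the idea (the tile size, overlap, and use of Pillow are assumptions, not the notebook's actual code):

```python
# Illustrative tiling sketch only -- tile size, overlap, and library
# choice are assumptions, not the notebook's actual implementation.
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # xView scenes exceed Pillow's default limit

def tile_image(path, tile_size=300, overlap=0.2):
    """Slice one large satellite image into square, overlapping tiles."""
    img = Image.open(path)
    step = int(tile_size * (1 - overlap))
    tiles = []
    for top in range(0, max(img.height - tile_size, 0) + 1, step):
        for left in range(0, max(img.width - tile_size, 0) + 1, step):
            tiles.append(img.crop((left, top, left + tile_size, top + tile_size)))
    return tiles
```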
@@ -499,7 +499,7 @@ Let's get started!
 
 ## Pre-req: Run startup-hook.sh
 
-This script will install some python dependencies, and install dataset labels needed when loading the Xview dataset:
+This script will install some python dependencies, and install dataset labels needed when loading the xView dataset:
 
 ```bash
 ## Temporary disable for Grenoble Demo
@@ -511,7 +511,7 @@ mkdir /tmp/val_sliced_no_neg
 mv val_300_02.json /tmp/val_sliced_no_neg/val_300_02.json
 ```
 
-*Note that completing this tutorial requires you to upload your dataset from Step 2 into a publicly accessible S3 bucket. This will enable for a large scale distributed experiment to have access to the dataset without installing the dataset on device. View [Determined Documentation](<* https://docs.determined.ai/latest/training/load-model-data.html#streaming-from-object-storage>) and [AWS instructions](<* https://codingsight.com/upload-files-to-aws-s3-with-the-aws-cli/>) to learn how to upload your dataset to an S3 bucket. Review the* `S3Backend` class in `data.py`
+*Note that completing this tutorial requires you to upload your dataset from Step 2 into a publicly accessible S3 bucket. This will enable for a large scale distributed experiment to have access to the dataset without installing the dataset on device. View [Determined Documentation](https://docs.determined.ai/latest/model-dev-guide/load-model-data.html) and [AWS instructions](https://codingsight.com/upload-files-to-aws-s3-with-the-aws-cli/) to learn how to upload your dataset to an S3 bucket. Review the* `S3Backend` class in `data.py`
 
 When you define your S3 bucket and uploaded your dataset, make sure to change the `TARIN_DATA_DIR` in `build_training_data_loader` with the defined path in the S3 bucket.
 
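The note in this hunk points at the `S3Backend` class in `data.py` without showing it. For orientation only, a minimal sketch of that streaming pattern with boto3 might look like the following; the class shape, bucket, and key names are placeholders, not the tutorial's actual code:

```python
# Hypothetical stand-in for the S3Backend pattern referenced above;
# the bucket and key names below are placeholders.
import io

import boto3

class S3Backend:
    def __init__(self, bucket_name: str):
        self._client = boto3.client("s3")
        self._bucket = bucket_name

    def get(self, key: str) -> io.BytesIO:
        """Stream a single object (e.g. one sliced tile) into memory."""
        obj = self._client.get_object(Bucket=self._bucket, Key=key)
        return io.BytesIO(obj["Body"].read())

# backend = S3Backend("my-xview-tiles")             # placeholder bucket
# tile = backend.get("train_sliced/tile_0000.png")  # placeholder key
```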
@@ -907,7 +907,7 @@ After command finishes, run the command to move the file to our prepared `model-
 
 This is the directory structure needed to prepare your custom PyTorch model for KServe inferencing:
 
-```
+```markdown
 ├── config
 │   └── config.properties
 ├── model-store
@@ -917,7 +917,7 @@ This is the directory structure needed to prepare your custom PyTorch model for
 
 #### What the config.properties file looks like
 
-```
+```markdown
 inference_address=http://0.0.0.0:8085
 management_address=http://0.0.0.0:8085
 metrics_address=http://0.0.0.0:8082
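Those addresses belong to the TorchServe instance that KServe wraps here. Purely as an illustration of how a client could hit the inference address above once a model archive is registered (the model name and input file are placeholders, and a KServe v1 deployment would use the `/v1/models/<name>:predict` JSON route instead of TorchServe's native path):

```python
# Illustrative request against the inference_address shown above;
# "xview-detector" and the input image are placeholder names.
import requests

with open("tile_0000.png", "rb") as f:
    resp = requests.post(
        "http://0.0.0.0:8085/predictions/xview-detector",
        data=f.read(),
    )
print(resp.status_code, resp.text)
```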
