Commit fd68198

Update Blog “production-ready-object-detection-model-training-workflow-with-hpe-machine-learning-development-environment”
1 parent cf58770 commit fd68198

File tree

1 file changed: +16 −16 lines


content/blog/production-ready-object-detection-model-training-workflow-with-hpe-machine-learning-development-environment.md

Lines changed: 16 additions & 16 deletions
@@ -778,35 +778,35 @@ Run the following bash commands in a terminal to install the kubectl runtime
### Install Kserve

-Run this bash script to install KServe onto our default Kubernetes Cluster, note this will install the following artifacts:
+Run this bash script to install KServe onto our default Kubernetes cluster. Note that it will install the following artifacts:

* ISTIO_VERSION=1.15.2, KNATIVE_VERSION=knative-v1.9.0, KSERVE_VERSION=v0.9.0-rc0, CERT_MANAGER_VERSION=v1.3.0
* `bash e2e_blogposts/ngc_blog/kserve_utils/bash_scripts/kserve_install.sh`
-### Patch Domain for local connection to KServe cluster/environment
+### Patch domain for local connection to KServe cluster/environment

Run this command to patch your cluster when you want to connect to it from the same machine:

`kubectl patch cm config-domain --patch '{"data":{"example.com":""}}' -n knative-serving`
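The patch merges an `example.com` key into the `config-domain` ConfigMap that Knative uses for routing. A sketch of what the patched ConfigMap's `data` section ends up looking like (illustrative, trimmed of Knative's default example keys):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  # An empty value makes example.com the default domain for all
  # Knative services, which KServe uses to route inference traffic.
  example.com: ""
```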
-### Run Port Forwarding to access KServe cluster
+### Run port forwarding to access KServe cluster

* `INGRESS_GATEWAY_SERVICE=$(kubectl get svc --namespace istio-system --selector="app=istio-ingressgateway" --output jsonpath='{.items[0].metadata.name}')`
* `kubectl port-forward --namespace istio-system svc/${INGRESS_GATEWAY_SERVICE} 8080:80`

Make sure to open a new terminal to continue the configuration.
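With the port-forward running, requests reach the cluster through `localhost:8080` and are routed by the `Host` header that Knative derives from the patched domain. A minimal sketch of how the request target is assembled — the service name and namespace below are illustrative assumptions, not values from this post:

```python
# Build the Host header and URL for a KServe v1 predict request sent
# through the local port-forward. The service name is hypothetical.
service_name = "my-inference-service"
namespace = "default"

# Knative routes on Host: <service>.<namespace>.<domain>; the domain
# was patched to example.com in the previous step.
host_header = f"{service_name}.{namespace}.example.com"

# KServe v1 protocol predict endpoint, reached via the forwarded port.
url = f"http://localhost:8080/v1/models/{service_name}:predict"

print(host_header)
print(url)
```

A request would then be sent with something like `curl -H "Host: $HOST" $URL -d @input.json` once the InferenceService exists.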
-### Create a Persistent Volume Claim for local model deployment
+### Create a persistent volume claim for local model deployment

-We will be creating a Persistent Volume Claim to host and access our Pytorch based Object Detection model locally. A persistent volume claim requires three k8s artifacts:
+You will be creating a persistent volume claim to host and access the PyTorch-based object detection model locally. A persistent volume claim requires three k8s artifacts:

-* A Persistent Volume
-* A Persistent Volume Claim
-* A K8S pod that connects the PVC to be accessed by other K8S resources
+* A persistent volume
+* A persistent volume claim
+* A k8s pod that mounts the PVC so it can be accessed by other k8s resources

-### Creating a Persistent Volume and Persistent Volume Claim
+### Creating a persistent volume and persistent volume claim

-Below is the yaml definition that defines the Persistent Volume (PV) and a Persistent Volume Claim (PVC). We already created a file that defines this PV in `k8s_files/pv-and-pvc.yaml`
+Below is the yaml definition of the PersistentVolume (PV) and PersistentVolumeClaim (PVC). We already created a file defining them in `k8s_files/pv-and-pvc.yaml`.

```yaml
apiVersion: v1
```
@@ -839,9 +839,9 @@ spec:

To create the PV and PVC, run the command: `kubectl apply -f k8s_files/pv-and-pvc.yaml`
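The yaml block above is truncated by the diff view. For orientation, a minimal PV/PVC pair of the kind such a file defines might look like the following — the names, capacity, and host path are illustrative assumptions, not the contents of `k8s_files/pv-and-pvc.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: model-pv              # hypothetical name
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi              # illustrative size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/models         # illustrative host path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-pvc             # hypothetical name
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```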
-### Create K8s Pod to access PVC
+### Create k8s pod to access PVC

-Below is the yaml definition that defines the K8s Pod that mounts the Persistent Volume Claim (PVC). We already created a file that defines this PV in `k8s_files/model-store-pod.yaml`
+Below is the yaml definition of the k8s Pod that mounts the PersistentVolumeClaim (PVC). We already created a file defining this Pod in `k8s_files/model-store-pod.yaml`.

```yaml
apiVersion: v1
```
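This block is likewise truncated by the diff. A minimal sketch of a pod that mounts a PVC so its contents can be reached by other workloads (and by `kubectl cp`) could look like this — the names and image are illustrative assumptions, not the contents of `k8s_files/model-store-pod.yaml`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: model-store-pod       # hypothetical name
spec:
  containers:
    - name: model-store
      image: ubuntu           # any long-running image works
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /pv      # where the volume appears in the pod
          name: model-store
  volumes:
    - name: model-store
      persistentVolumeClaim:
        claimName: model-pvc  # hypothetical PVC name
```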
@@ -875,9 +875,9 @@ Here we will complete some preparation steps to deploy a trained custom FasterRC

* `wget -O kserve_utils/torchserve_utils/trained_model.pth https://determined-ai-xview-coco-dataset.s3.us-west-2.amazonaws.com/trained_model.pth`

-### Stripping the Checkpoint of the Optimizer State Dict
+### Stripping the checkpoint of the optimizer state dictionary

-Checkpoints created from a Determined Experiment will save both the model parameters and the optimizer parameters. We will need to strip the checkpoint of all parameters except the model parameters for inference. Run the bash command to generate `train_model_stripped.pth`:
+Checkpoints created from a Determined experiment will save both the model parameters and the optimizer parameters. You will need to strip the checkpoint of all parameters except the model parameters for inference. Run the bash command to generate `train_model_stripped.pth`:

Run the below command in a terminal:
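Conceptually, the stripping step keeps only the model weights and drops the optimizer state. A torch-free sketch of the idea — the key names here are illustrative assumptions; the actual `strip_checkpoint.py` operates on `.pth` files with `torch.load`/`torch.save`:

```python
def strip_checkpoint(ckpt: dict) -> dict:
    # Keep only the model parameters needed for inference; optimizer
    # state (momentum buffers, learning rates, etc.) is dropped.
    return {"model_state_dict": ckpt["model_state_dict"]}

# Toy checkpoint standing in for what a Determined experiment saves.
checkpoint = {
    "model_state_dict": {"backbone.conv1.weight": [0.1, 0.2]},
    "optimizer_state_dict": {"lr": 0.001, "momentum": 0.9},
}

stripped = strip_checkpoint(checkpoint)
print(sorted(stripped))  # → ['model_state_dict']
```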
@@ -888,7 +888,7 @@ python kserve_utils/torchserve_utils/strip_checkpoint.py --ckpt-path kserve_util

### Run TorchServe Export to create .mar file

-Run the below command to export the Pytorch Checkpoint into a .mar file that is required for torchserve inference. Our Kserve InferenceService will automatically deploy a Pod with a docker image that support TorchServe inferencing.
+Run the below command to export the PyTorch checkpoint into a .mar file that is required for TorchServe inference. The KServe InferenceService will automatically deploy a Pod with a docker image that supports TorchServe inferencing.

```cwl
torch-model-archiver --model-name xview-fasterrcnn \
```
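The archiver invocation is cut off by the diff view. The general shape of a `torch-model-archiver` call uses these standard flags; the placeholders below are not the post's exact arguments:

```cwl
torch-model-archiver --model-name <name> \
  --version 1.0 \
  --serialized-file <stripped .pth file> \
  --handler <custom handler .py> \
  --export-path model-store/
```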
@@ -905,7 +905,7 @@ After the command finishes, run the command to move the file to our prepared `model-store/`

### Copy `config/` and `model-store/` folders to the K8S PVC Pod

-This is the directory structure needed to prepare our custom Pytorch Model for KServe inferencing:
+This is the directory structure needed to prepare your custom PyTorch model for KServe inferencing:

```
├── config
```
