content/blog/production-ready-object-detection-model-training-workflow-with-hpe-machine-learning-development-environment.md (16 additions & 16 deletions)
@@ -778,35 +778,35 @@ Run the following bash commands in a terminal to install the kubectl runtime
### Install KServe
-Run this bash script to install KServe onto our default Kubernetes Cluster, note this will install the following artifacts:
+Run this bash script to install KServe onto the default Kubernetes cluster; note that it will install the following artifacts:
Make sure to open a new terminal to continue the configuration.
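The install script itself is collapsed out of this diff view. As a minimal sketch, KServe's published quick-install script performs a default install of KServe along with its Istio, Knative Serving, and cert-manager dependencies; the release branch pinned here is an assumption, not the version the blog tested:

```bash
# Hedged sketch: install KServe and its dependencies via the upstream
# quick-install script. "release-0.11" is an assumed pin; use the release
# branch you have verified against your cluster.
curl -s "https://raw.githubusercontent.com/kserve/kserve/release-0.11/hack/quick_install.sh" | bash
```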
-### Create a Persistent Volume Claim for local model deployment
+### Create a persistent volume claim for local model deployment
-We will be creating a Persistent Volume Claim to host and access our Pytorch based Object Detection model locally. A persistent volume claim requires three k8s artifacts:
+You will be creating a persistent volume claim to host and access the PyTorch-based object detection model locally. A persistent volume claim requires three k8s artifacts:
-* A Persistent Volume
-* A Persistent Volume Claim
-* A K8S pod that connects the PVC to be accessed by other K8S resources
+* A persistent volume
+* A persistent volume claim
+* A k8s pod that mounts the PVC so it can be accessed by other k8s resources
-### Creating a Persistent Volume and Persistent Volume Claim
+### Creating a persistent volume and persistent volume claim
-Below is the yaml definition that defines the Persistent Volume (PV) and a Persistent Volume Claim (PVC). We already created a file that defines this PV in `k8s_files/pv-and-pvc.yaml`
+Below is the YAML definition of the PersistentVolume (PV) and the PersistentVolumeClaim (PVC). We already created a file with this definition in `k8s_files/pv-and-pvc.yaml`.
```yaml
apiVersion: v1
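# The rest of this manifest is collapsed out of the diff view above. What
# follows is a hedged sketch of a typical hostPath PV with a matching PVC;
# the names, capacity, storage class, and path are assumptions, not the
# blog's actual values.
kind: PersistentVolume
metadata:
  name: model-pv-volume
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/models
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```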
@@ -839,9 +839,9 @@ spec:
To create the PV and PVC, run the command: `kubectl apply -f k8s_files/pv-and-pvc.yaml`
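Before creating the pod, it can help to confirm that the claim actually bound; a quick check (resource names depend on the YAML above):

```bash
# Both objects should be listed, with the PVC's STATUS column showing "Bound".
kubectl get pv,pvc
```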
-### Create K8s Pod to access PVC
+### Create k8s pod to access PVC
-Below is the yaml definition that defines the K8s Pod that mounts the Persistent Volume Claim (PVC). We already created a file that defines this PV in `k8s_files/model-store-pod.yaml`
+Below is the YAML definition of the k8s pod that mounts the PersistentVolumeClaim (PVC). We already created a file with this pod definition in `k8s_files/model-store-pod.yaml`.
```yaml
apiVersion: v1
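# The remainder of this manifest is collapsed out of the diff view above.
# Below is a hedged sketch of a long-running pod that mounts the claim so
# model files can be copied into it; the image, names, and mount path are
# assumptions, not the blog's actual values.
kind: Pod
metadata:
  name: model-store-pod
spec:
  containers:
    - name: model-store
      image: ubuntu
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /pv
          name: model-store
  volumes:
    - name: model-store
      persistentVolumeClaim:
        claimName: model-pv-claim
```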
@@ -875,9 +875,9 @@ Here we will complete some preparation steps to deploy a trained custom FasterRC
-### Stripping the Checkpoint of the Optimizer State Dict
+### Stripping the checkpoint of the optimizer state dictionary
-Checkpoints created from a Determined Experiment will save both the model parameters and the optimizer parameters. We will need to strip the checkpoint of all parameters except the model parameters for inference. Run the bash command to generate `train_model_stripped.pth`:
+Checkpoints created from a Determined experiment save both the model parameters and the optimizer parameters. You will need to strip everything but the model parameters before inference. Run the bash command below to generate `train_model_stripped.pth`:
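The actual command is collapsed out of this extract. As a hedged sketch of the idea: Determined's PyTorch checkpoints typically nest the model weights under a `models_state_dict` key, though the file name and key layout used here are assumptions:

```bash
# Hypothetical stripping step: load the Determined checkpoint and save only
# the model weights, dropping optimizer and scheduler state. Paths and key
# names are assumptions for illustration.
python - <<'EOF'
import torch

ckpt = torch.load("state_dict.pth", map_location="cpu")
model_weights = ckpt["models_state_dict"][0]  # first (and only) model's weights
torch.save(model_weights, "train_model_stripped.pth")
EOF
```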
-Run the below command to export the Pytorch Checkpoint into a .mar file that is required for torchserve inference. Our Kserve InferenceService will automatically deploy a Pod with a docker image that support TorchServe inferencing.
+Run the command below to export the PyTorch checkpoint into the .mar file that TorchServe requires for inference. The KServe InferenceService will automatically deploy a pod with a Docker image that supports TorchServe inferencing.
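The export command itself is not shown in this extract. A minimal sketch using TorchServe's `torch-model-archiver` CLI; the model file, handler choice, and names below are illustrative assumptions:

```bash
# Hedged sketch: package the stripped checkpoint as a .mar archive for
# TorchServe. --model-file, --handler, and the names are assumed values.
mkdir -p model-store
torch-model-archiver \
  --model-name fasterrcnn \
  --version 1.0 \
  --model-file model.py \
  --serialized-file train_model_stripped.pth \
  --handler object_detector \
  --export-path model-store
```

The resulting archive can then be copied into the PVC through the model-store pod (for example with `kubectl cp model-store/fasterrcnn.mar model-store-pod:/pv/`), so the InferenceService can reference it via a `pvc://` storage URI.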