
Commit 71ef29f

committed
Update Blog “production-ready-object-detection-model-training-workflow-with-hpe-machine-learning-development-environment”
1 parent c03df06 commit 71ef29f

File tree

1 file changed: +6 additions, −6 deletions

content/blog/production-ready-object-detection-model-training-workflow-with-hpe-machine-learning-development-environment.md

Lines changed: 6 additions & 6 deletions
````diff
@@ -954,9 +954,9 @@ model_snapshot={"name": "startup.cfg","modelCount": 1,"models": {"xview-fasterrc
 ]
 ```
 
-Note that we have a `config/` folder that includes a config.properties. This defines A. We also have a `model-store/` directory that contains are exported models and a `properties.json` file. We need this file for B
+Note that there is a `config/` folder that includes a `config.properties` file. This defines A. There is also a `model-store/` directory that contains our exported models and a `properties.json` file. You will need this file for B.
 
-Now we will run several kubectl commands to copy over these folders into our Pod and into the PVC defined directory
+Now, run several `kubectl` commands to copy these folders into your Pod, into the PVC-defined directory.
 
 * `kubectl cp kserve_utils/torchserve_utils/config/ model-store-pod:/pv/config/`
 * `kubectl cp kserve_utils/torchserve_utils/model-store/ model-store-pod:/pv/model-store/`
````
````diff
@@ -966,11 +966,11 @@ Run these commands to verify the contents have been copied over to the pod.
 * `kubectl exec --tty model-store-pod -- ls /pv/config`
 * `kubectl exec --tty model-store-pod -- ls /pv/model-store`
 
-## Deploying model using a KServe InferenceService
+## Deploying a model using a KServe InferenceService
 
 ### Create Inference Service
 
-Below is the yaml definition that defines the KServe InferenceService that deploys models stored in the PVC. We already created a file that defines this PV in `k8s_files/torch-kserve-pvc.yaml`
+Below is the YAML definition of the KServe InferenceService that deploys the models stored in the PVC. A file defining this PV has already been created in `k8s_files/torch-kserve-pvc.yaml`.
 
 ```yaml
 apiVersion: "serving.kserve.io/v1beta1"
````
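The diff truncates the manifest at its first line. For orientation only, a minimal sketch of what a PVC-backed TorchServe InferenceService of this kind can look like is shown below; the service name and PVC claim name here are assumptions, not values taken from the post:

```yaml
apiVersion: "serving.kserve.io/v1beta1"
kind: InferenceService
metadata:
  name: torchserve-detector          # hypothetical service name
spec:
  predictor:
    pytorch:
      # pvc:// points KServe at a PersistentVolumeClaim;
      # "model-pv-claim" is a hypothetical claim name
      storageUri: "pvc://model-pv-claim/"
```

The `storageUri` with a `pvc://` scheme is how KServe mounts models from a PersistentVolumeClaim rather than from object storage.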
````diff
@@ -1006,15 +1006,15 @@ filename='kserve_utils/torchserve_utils/example_img.jpg'
 im = Image.open(filename)
 ```
 
-Here is the test image we will send to the Deployed Model
+Here is the test image that will be sent to the deployed model.
 
 ```python
 im
 ```
 
 ![png](/img/output_26_0.png)
 
-Here we will encode the image into into base64 binary format
+Now, encode the image into base64 binary format.
 
 ```python
 image = open(filename, 'rb') # open binary file in read mode
```
````
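The diff cuts off after the file is opened. As a sketch of where this step is headed, the snippet below base64-encodes raw image bytes and wraps them in a KServe v1-style `{"instances": [...]}` request body; the exact field names and helper function are assumptions for illustration, not code from the post:

```python
import base64
import json

def build_payload(image_bytes: bytes) -> str:
    """Base64-encode raw image bytes into a JSON request body.

    The {"instances": [{"data": ...}]} shape follows the KServe v1
    prediction protocol; the field names here are a sketch, not taken
    verbatim from the post.
    """
    encoded = base64.b64encode(image_bytes).decode('utf-8')
    return json.dumps({"instances": [{"data": encoded}]})

# Usage with the file opened above (hypothetical continuation):
# with open(filename, 'rb') as f:
#     payload = build_payload(f.read())
```

Base64 is used because raw image bytes cannot be embedded directly in a JSON document.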
