Finally, we export the fastai model for use in the following sections of this tutorial:

```python
learn.export("./fastai_unet.pkl")
```
For more details about the modeling process, refer to `notebook/01_U-net_Modelling.ipynb`[[link](notebook/01_U-net_Modelling.ipynb)].
## PyTorch Transfer Modeling from FastAI
Here we can see the difference: the FastAI export `fastai_unet.pkl` packages the data transforms together with the model, whereas the plain PyTorch model needs those preprocessing steps applied explicitly.
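As a rough sketch of the weight transfer (assuming fastai v2's `load_learner`; the output file name is hypothetical), the plain PyTorch weights can be pulled out of the exported learner and saved as an ordinary state dict:

```python
import torch
from fastai.learner import load_learner

# Load the learner exported by learn.export() in the modeling step
learn = load_learner("./fastai_unet.pkl")

# learn.model is the underlying plain PyTorch nn.Module; save only its
# weights so they can be reloaded without any fastai dependency.
torch.save(learn.model.state_dict(), "./fastai_unet_weights.pth")  # hypothetical name
```

The saved state dict can then be loaded into an equivalent `nn.Module` with `model.load_state_dict(torch.load(...))`.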
**Note**: in `image_tfm`, make sure the image size and normalization statistics are consistent with the training step. In our example the size is `96x128`, and the normalization is, by default, the [ImageNet](http://www.image-net.org/) statistics used in FastAI. If other transformations were applied during training, they may need to be added here as well; a sketch of such a transform follows.
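A minimal sketch of what `image_tfm` could look like with `torchvision.transforms` (the exact composition in the notebook may differ):

```python
from torchvision import transforms

# Hypothetical reconstruction of image_tfm: resize to the training
# resolution (height 96, width 128), convert to a tensor, and apply
# the ImageNet normalization statistics that FastAI uses by default.
image_tfm = transforms.Compose([
    transforms.Resize((96, 128)),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.485, 0.456, 0.406],  # ImageNet channel means
        std=[0.229, 0.224, 0.225],   # ImageNet channel std deviations
    ),
])
```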
For more details about the PyTorch weight-transfer process, please refer to `notebook/02_Inference_in_pytorch.ipynb`[[link](notebook/02_Inference_in_pytorch.ipynb)].
## Deployment to TorchServe
In this section we deploy the PyTorch model to TorchServe. For installation, please refer to the TorchServe [GitHub](https://github.com/pytorch/serve) repository.
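For a quick local setup, the basic install (per the TorchServe repository; note that TorchServe also requires a Java runtime) looks like:

```bash
# Install TorchServe and the tool that packages models into .mar archives
pip install torchserve torch-model-archiver
```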
Overall, there are three main steps to using TorchServe (a command-line sketch follows the list):
1. Archive the model into `*.mar`.
2. Start `torchserve`.
3. Call the API and get the response.
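A hedged sketch of the three steps; the model name `unet`, the handler file `handler.py`, the weights file, and the sample image are placeholders for illustration, and the exact commands for this tutorial are in the notebook:

```bash
# 1. Package the weights and a custom handler into model_store/unet.mar
torch-model-archiver --model-name unet \
                     --version 1.0 \
                     --serialized-file fastai_unet_weights.pth \
                     --handler handler.py \
                     --export-path model_store

# 2. Start TorchServe and register the archive
torchserve --start --ncs --model-store model_store --models unet.mar

# 3. Call the inference API (default port 8080) with an input image
curl http://127.0.0.1:8080/predictions/unet -T sample_image.jpg
```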
```
user 0m0.280s
sys 0m0.039s
```
The first call has longer latency because the model weights are loaded in `initialize`, but this is mitigated from the second call onward. For more details about TorchServe setup and usage, please refer to `notebook/03_TorchServe.ipynb`[[link](notebook/03_TorchServe.ipynb)].
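For orientation, here is a minimal sketch of such a handler built on TorchServe's `BaseHandler` (illustrative only, with a placeholder network and a hypothetical weights file name; it is not the tutorial's actual handler):

```python
import os

import torch
import torch.nn as nn
from ts.torch_handler.base_handler import BaseHandler


class TinyNet(nn.Module):
    """Placeholder network; the tutorial's real model is the U-net."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 1, kernel_size=1)

    def forward(self, x):
        return self.conv(x)


class SegmentationHandler(BaseHandler):
    """Sketch of a custom TorchServe handler (illustrative only)."""

    def initialize(self, context):
        # Runs once when a worker starts; the weight loading here is the
        # one-time cost behind the slower first request noted above.
        model_dir = context.system_properties.get("model_dir")
        self.device = torch.device(
            "cuda" if torch.cuda.is_available() else "cpu")
        self.model = TinyNet()  # swap in the real U-net constructor
        state_dict = torch.load(
            os.path.join(model_dir, "weights.pth"),  # hypothetical file name
            map_location=self.device)
        self.model.load_state_dict(state_dict)
        self.model.to(self.device).eval()
        self.initialized = True
```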
## Deployment to Amazon SageMaker Inference Endpoint