Commit eff3c73

Author: Baichuan Sun
Message: fix: typos
Parent: 116228c

File tree

1 file changed, +4 −4 lines


README.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -129,7 +129,7 @@ Finally, we export the fastai model to use for following sections of this tutorial
 learn.export("./fastai_unet.pkl")
 ```
 
-For more details about the modeling process, refere to `notebook/01_U-net_Modelling.ipynb` [[link](notebook/01_U-net_Modelling.ipynb)].
+For more details about the modeling process, refer to `notebook/01_U-net_Modelling.ipynb` [[link](notebook/01_U-net_Modelling.ipynb)].
 
 ## PyTorch Transfer Modeling from FastAI
 
````

````diff
@@ -351,15 +351,15 @@ Here we can see the difference: in FastAI model `fastai_unet.pkl`, it packages a
 
 **Note**: in `image_tfm` make sure the image size and normalization statistics are consistent with the training step. In our example here, the size is `96x128` and normalization is by default from [ImageNet](http://www.image-net.org/) as used in FastAI. If other transformations were applied during training, they may need to be added here as well.
 
-For more details about the PyTorch weights transferring process, please refere to `notebook/02_Inference_in_pytorch.ipynb` [[link](notebook/02_Inference_in_pytorch.ipynb)].
+For more details about the PyTorch weights transferring process, please refer to `notebook/02_Inference_in_pytorch.ipynb` [[link](notebook/02_Inference_in_pytorch.ipynb)].
 
 ## Deployment to TorchServe
 
 In this section we deploy the PyTorch model to TorchServe. For installation, please refer to TorchServe [Github](https://github.com/pytorch/serve) Repository.
 
 Overall, there are mainly 3 steps to use TorchServe:
 
-1. Archive the model into `*mar`.
+1. Archive the model into `*.mar`.
 2. Start the `torchserve`.
 3. Call the API and get the response.
 
````
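The normalization the note in this hunk refers to can be sketched numerically. This is a minimal illustration under the assumption stated in the README (FastAI's default ImageNet statistics, a `96x128` input), not the tutorial's actual transform code:

```python
# Sketch (assumption, not the tutorial's code): ImageNet mean/std,
# the FastAI defaults mentioned in the note above.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb):
    """Normalize one RGB pixel whose channels are already scaled to [0, 1]."""
    return tuple((c - m) / s for c, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD))

# A mid-gray pixel from a hypothetical 96x128 input image:
print(normalize_pixel((0.5, 0.5, 0.5)))
```

The same per-channel `(x - mean) / std` is applied to every pixel of the resized image; if training used extra transforms, they must be replayed here too.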

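The three TorchServe steps listed in this hunk can be sketched as shell commands. The model, weight-file, and handler names here are hypothetical placeholders for illustration, not the tutorial's actual artifacts, and the commands require TorchServe to be installed:

```shell
# 1. Archive the model into *.mar (names below are hypothetical).
torch-model-archiver --model-name unet \
    --version 1.0 \
    --serialized-file fastai_unet.pth \
    --handler handler.py \
    --export-path model_store

# 2. Start torchserve, pointing it at the model store.
torchserve --start --model-store model_store --models unet=unet.mar

# 3. Call the inference API and get the response.
curl http://127.0.0.1:8080/predictions/unet -T sample_input.png
```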
````diff
@@ -515,7 +515,7 @@ user 0m0.280s
 sys 0m0.039s
 ```
 
-The first call would have longer latency due to model weights loading defined in `initialize`, but this will be mitigated from the second call onward. For more details about TorchServe setup and usage, please refere to `notebook/03_TorchServe.ipynb` [[link](notebook/03_TorchServe.ipynb)].
+The first call would have longer latency due to model weights loading defined in `initialize`, but this will be mitigated from the second call onward. For more details about TorchServe setup and usage, please refer to `notebook/03_TorchServe.ipynb` [[link](notebook/03_TorchServe.ipynb)].
 
 ## Deployment to Amazon SageMaker Inference Endpoint
 
````
