Commit f4bcdbf

Author: Baichuan Sun (committed)
Commit message: update: README change image to CC0 license
1 parent 2e2797f commit f4bcdbf

File tree: 1 file changed (+22 −18 lines)


README.md

Lines changed: 22 additions & 18 deletions
@@ -151,20 +151,22 @@ torch.save(learn.model.state_dict(), "fasti_unet_weights.pth")
 
 It's also straightforward to obtain the FastAI prediction on a sample image.
 
-![sample_image](sample/Seq05VD_f00210.png)
+> "2013.04 - 'Streetview of a small neighborhood', with residential buildings, Amsterdam city photo by Fons Heijnsbroek, The Netherlands" by Amsterdam free photos & pictures of the Dutch city is marked under CC0 1.0. To view the terms, visit https://creativecommons.org/licenses/cc0/1.0/
+
+![sample_image](sample/street_view_of_a_small_neighborhood.png)
 
 ```python
-image_path = "Seq05VD_f00210.png"
+image_path = "street_view_of_a_small_neighborhood.png"
 pred_fastai = learn.predict(image_path)
 pred_fastai[0].numpy()
 >>>
-array([[26, 26, 26, ..., 26, 26, 26],
-       [26, 26, 26, ..., 26, 26, 26],
-       [26, 26, 26, ..., 26, 26, 26],
+array([[26, 26, 26, ..., 4, 4, 4],
+       [26, 26, 26, ..., 4, 4, 4],
+       [26, 26, 26, ..., 4, 4, 4],
        ...,
-       [17, 17, 17, ..., 17, 17, 17],
-       [17, 17, 17, ..., 17, 17, 17],
-       [17, 17, 17, ..., 17, 17, 17]])
+       [17, 17, 17, ..., 30, 30, 30],
+       [17, 17, 17, ..., 30, 30, 30],
+       [17, 17, 17, ..., 30, 30, 30]])
 ```
 
 ### PyTorch Model from FastAI Source Code
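The prediction returned by `learn.predict` is a 2-D array of per-pixel class indices. A minimal sketch of turning it into a quick visual check, assuming `pred_fastai` from the snippet above is still in scope:

```python
# Visualize the predicted segmentation mask; assumes `pred_fastai`
# from the README snippet above is available.
import matplotlib.pyplot as plt

mask = pred_fastai[0].numpy()   # 2-D array of per-pixel class indices
plt.imshow(mask)                # each class index is rendered as a distinct color
plt.axis("off")
plt.show()
```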
@@ -311,9 +313,9 @@ from torchvision import transforms
 from PIL import Image
 import numpy as np
 
-image_path = "Seq05VD_f00210.png"
+image_path = "street_view_of_a_small_neighborhood.png"
 
-image = Image.open(image_path)
+image = Image.open(image_path).convert("RGB")
 image_tfm = transforms.Compose(
     [
         transforms.Resize((96, 128)),
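The hunk above only shows the start of the `transforms.Compose` pipeline. For context, here is a minimal sketch of how such a preprocessing pipeline typically feeds the rebuilt PyTorch model; the normalization statistics and the `model` variable are assumptions rather than values taken from this diff:

```python
import torch
from torchvision import transforms
from PIL import Image

image = Image.open("street_view_of_a_small_neighborhood.png").convert("RGB")

# Assumed preprocessing: resize to the training resolution and normalize with
# ImageNet statistics (the exact values used in the README may differ).
image_tfm = transforms.Compose(
    [
        transforms.Resize((96, 128)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ]
)

x = image_tfm(image).unsqueeze(0)  # add a batch dimension: [1, 3, 96, 128]

model.eval()                       # `model` is the rebuilt U-Net, assumed to be in scope
with torch.no_grad():
    raw_out = model(x)             # logits with shape [1, n_classes, 96, 128]
```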
@@ -332,13 +334,13 @@ raw_out.shape
 pred_res = raw_out[0].argmax(dim=0).numpy().astype(np.uint8)
 pred_res
 >>>
-array([[26, 26, 26, ..., 26, 26, 26],
-       [26, 26, 26, ..., 26, 26, 26],
-       [26, 26, 26, ..., 26, 26, 26],
+array([[26, 26, 26, ..., 4, 4, 4],
+       [26, 26, 26, ..., 4, 4, 4],
+       [26, 26, 26, ..., 4, 4, 4],
        ...,
-       [17, 17, 17, ..., 17, 17, 17],
-       [17, 17, 17, ..., 17, 17, 17],
-       [17, 17, 17, ..., 17, 17, 17]], dtype=uint8)
+       [17, 17, 17, ..., 30, 30, 30],
+       [17, 17, 17, ..., 30, 30, 30],
+       [17, 17, 17, ..., 30, 30, 30]], dtype=uint8)
 
 np.all(pred_fastai[0].numpy() == pred_res)
 >>> True
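To make the `argmax` step above concrete: `raw_out[0]` is a per-class logit volume of shape `[n_classes, H, W]`, and taking the argmax over the class dimension gives one class index per pixel. A small self-contained illustration with dummy logits (the class count of 32 is only an assumption here):

```python
import torch

# Dummy logits shaped like the model output for a single image: [n_classes, H, W].
n_classes, H, W = 32, 96, 128
logits = torch.randn(n_classes, H, W)

# For every pixel, pick the class with the highest logit.
pred = logits.argmax(dim=0)   # shape [96, 128], values in [0, n_classes)
print(pred.shape, int(pred.min()), int(pred.max()))
```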
@@ -534,7 +536,7 @@ The details of these steps are described in `notebook/04_SageMaker.ipynb` [[link
 Read a sample image.
 
 ```python
-file_name = "Seq05VD_f00210.png"
+file_name = "street_view_of_a_small_neighborhood.png"
 
 with open(file_name, 'rb') as f:
     payload = f.read()
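The `payload` read above is the raw image bytes. One way to send it to the deployed endpoint is through the SageMaker runtime client; this is only a sketch, and the endpoint name and content type are assumptions rather than values taken from this diff (the actual invocation is shown in `notebook/04_SageMaker.ipynb`):

```python
import json
import boto3

# Hypothetical endpoint name; the real value comes from the deployment step
# described in notebook/04_SageMaker.ipynb.
endpoint_name = "fastai-unet-endpoint"

runtime = boto3.client("sagemaker-runtime")
resp = runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/x-image",  # assumed content type for raw image bytes
    Body=payload,                       # bytes read from the sample image above
)

# Assuming the endpoint returns JSON with the mask under "base64_prediction",
# as the decoding snippet further down expects.
response = json.loads(resp["Body"].read())
```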
@@ -557,7 +559,9 @@ pred_decoded_byte = base64.decodebytes(bytes(response["base64_prediction"], enco
 pred_decoded = np.reshape(
     np.frombuffer(pred_decoded_byte, dtype=np.uint8), (96, 128)
 )
-plt.imshow(pred_decoded);
+plt.imshow(pred_decoded)
+plt.axis("off")
+plt.show()
 ```
 
 ![sample_prediction_response](sample/sample_pred_mask.png)
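As a small follow-up to the decoding step above, the recovered `(96, 128)` uint8 mask can also be written back to disk as an image; a minimal sketch, assuming `pred_decoded` from the snippet above and a hypothetical output filename:

```python
from PIL import Image

# `pred_decoded` is the (96, 128) uint8 class-index mask recovered above.
Image.fromarray(pred_decoded).save("pred_mask_from_endpoint.png")
```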
