Commit 9a62c10

[skip ci] 📝 Update Jupyter Notebooks for Release v1.6.0 (#885)
- Update Jupyter Notebooks for the new release.
- Fix issues with API changes, e.g., `device` argument instead of the `ON_GPU` flag.
1 parent 4a1940d commit 9a62c10

File tree

9 files changed (+5343, −5295 lines)


examples/05-patch-prediction.ipynb

Lines changed: 67 additions & 74 deletions
Large diffs are not rendered by default.

examples/06-semantic-segmentation.ipynb

Lines changed: 33 additions & 17 deletions
Large diffs are not rendered by default.

examples/07-advanced-modeling.ipynb

Lines changed: 88 additions & 44 deletions
Large diffs are not rendered by default.

examples/08-nucleus-instance-segmentation.ipynb

Lines changed: 13 additions & 13 deletions
@@ -158,7 +158,7 @@
 "source": [
 "### GPU or CPU runtime\n",
 "\n",
-"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify if you are using GPU or CPU hardware acceleration. In Colab, you need to make sure that the runtime type is set to GPU in the *\"Runtime→Change runtime type→Hardware accelerator\"*. If you are *not* using GPU, consider changing the `ON_GPU` flag to `Flase` value, otherwise, some errors will be raised when running the following cells.\n",
+"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to specify an appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device), e.g., \"cuda\" or \"cpu\", depending on whether you are using a GPU or a CPU. In Colab, make sure that the runtime type is set to GPU via *\"Runtime→Change runtime type→Hardware accelerator\"*. If you are *not* using a GPU, set `device` to `\"cpu\"`; otherwise, errors will be raised when running the following cells.\n",
 "\n"
 ]
 },
@@ -173,8 +173,7 @@
 },
 "outputs": [],
 "source": [
-"# Should be changed to False if no cuda-enabled GPU is available.\n",
-"ON_GPU = True # Default is True."
+"device = \"cuda\" # Choose appropriate device"
 ]
 },
 {
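The updated cells hard-code `device = "cuda"`. A minimal sketch of choosing the device string automatically instead, falling back to `"cpu"` when PyTorch is not installed or no CUDA-enabled GPU is visible (this snippet is illustrative, not part of the commit):

```python
# Pick a torch-style device string automatically; fall back to "cpu"
# when PyTorch is missing or no CUDA-enabled GPU is available.
try:
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:  # PyTorch not installed
    device = "cpu"

print(device)
```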
@@ -356,7 +355,7 @@
 " [img_file_name],\n",
 " save_dir=\"sample_tile_results/\",\n",
 " mode=\"tile\",\n",
-" on_gpu=ON_GPU,\n",
+" device=device,\n",
 " crash_on_exception=True,\n",
 ")"
 ]
@@ -386,7 +385,7 @@
 "\n",
 "- `mode`: the mode of inference which can be set to either `'tile'` or `'wsi'` for plain histology images or structured whole slides images, respectively.\n",
 "\n",
-"- `on_gpu`: can be `True` or `False` to dictate running the computations on GPU or CPU.\n",
+"- `device`: specify an appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device), e.g., \"cuda\", \"cuda:0\", \"mps\", or \"cpu\".\n",
 "\n",
 "- `crash_on_exception`: If set to `True`, the running loop will crash if there is an error during processing a WSI. Otherwise, the loop will move on to the next image (wsi) for processing. We suggest that you first make sure that the prediction is working as expected by testing it on a couple of inputs and then set this flag to `False` to process large cohorts of inputs.\n",
 "\n",
@@ -5615,13 +5614,13 @@
 ")\n",
 "\n",
 "# WSI prediction\n",
-"# if ON_GPU=False, this part will take more than a couple of hours to process.\n",
+"# if device=\"cpu\", this part will take more than a couple of hours to process.\n",
 "wsi_output = inst_segmentor.predict(\n",
 " [wsi_file_name],\n",
 " masks=None,\n",
 " save_dir=\"sample_wsi_results/\",\n",
 " mode=\"wsi\",\n",
-" on_gpu=ON_GPU,\n",
+" device=device,\n",
 " crash_on_exception=True,\n",
 ")"
 ]
@@ -5638,7 +5637,7 @@
 "1. Setting `mode='wsi'` in the arguments to `predict` tells the program that the input are in WSI format.\n",
 "1. `masks=None`: the `masks` argument to the `predict` function is handled in the same way as the imgs argument. It is a list of paths to the desired image masks. Patches from `imgs` are only processed if they are within a masked area of their corresponding `masks`. If not provided (`masks=None`), then a tissue mask is generated for whole-slide images or, for image tiles, the entire image is processed.\n",
 "\n",
-"The above code cell might take a while to process, especially if `ON_GPU=False`. The processing time mostly depends on the size of the input WSI.\n",
+"The above code cell might take a while to process, especially if `device=\"cpu\"`. The processing time mostly depends on the size of the input WSI.\n",
 "The output, `wsi_output`, of `predict` contains a list of paths to the input WSIs and the corresponding output results saved on disk. The results for nucleus instance segmentation in `'wsi'` mode are stored in a Python dictionary, in the same way as was done for `'tile'` mode.\n",
 "We use `joblib` to load the outputs for this sample WSI and then inspect the results dictionary.\n",
 "\n"
@@ -5788,11 +5787,12 @@
 ")\n",
 "\n",
 "color_dict = {\n",
-" 0: (\"neoplastic epithelial\", (255, 0, 0)),\n",
-" 1: (\"Inflammatory\", (255, 255, 0)),\n",
-" 2: (\"Connective\", (0, 255, 0)),\n",
-" 3: (\"Dead\", (0, 0, 0)),\n",
-" 4: (\"non-neoplastic epithelial\", (0, 0, 255)),\n",
+" 0: (\"background\", (255, 165, 0)),\n",
+" 1: (\"neoplastic epithelial\", (255, 0, 0)),\n",
+" 2: (\"Inflammatory\", (255, 255, 0)),\n",
+" 3: (\"Connective\", (0, 255, 0)),\n",
+" 4: (\"Dead\", (0, 0, 0)),\n",
+" 5: (\"non-neoplastic epithelial\", (0, 0, 255)),\n",
 "}\n",
 "\n",
 "# Create the overlay image\n",

examples/09-multi-task-segmentation.ipynb

Lines changed: 14 additions & 15 deletions
@@ -105,7 +105,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 2,
+"execution_count": null,
 "metadata": {
 "id": "UEIfjUTaJLPj",
 "outputId": "e4f383f2-306d-4afd-cd82-fec14a184941",
@@ -169,13 +169,13 @@
 "source": [
 "### GPU or CPU runtime\n",
 "\n",
-"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify whether you are using GPU or CPU hardware acceleration. In Colab, make sure that the runtime type is set to GPU, using the menu *Runtime→Change runtime type→Hardware accelerator*. If you are *not* using GPU, change the `ON_GPU` flag to `False`.\n",
+"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to specify an appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device), e.g., \"cuda\" or \"cpu\", depending on whether you are using a GPU or a CPU. In Colab, make sure that the runtime type is set to GPU via *\"Runtime→Change runtime type→Hardware accelerator\"*. If you are *not* using a GPU, set `device` to `\"cpu\"`; otherwise, errors will be raised when running the following cells.\n",
 "\n"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 3,
+"execution_count": null,
 "metadata": {
 "id": "haTA_oQIY1Vy",
 "tags": [
@@ -184,8 +184,7 @@
 },
 "outputs": [],
 "source": [
-"# Should be changed to False if no cuda-enabled GPU is available.\n",
-"ON_GPU = True # Default is True."
+"device = \"cuda\" # Choose appropriate device"
 ]
 },
 {
@@ -205,7 +204,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": null,
 "metadata": {
 "colab": {
 "base_uri": "https://localhost:8080/"
@@ -260,7 +259,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 6,
+"execution_count": null,
 "metadata": {
 "colab": {
 "base_uri": "https://localhost:8080/"
@@ -335,7 +334,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 7,
+"execution_count": null,
 "metadata": {
 "colab": {
 "base_uri": "https://localhost:8080/"
@@ -390,7 +389,7 @@
 " [img_file_name],\n",
 " save_dir=global_save_dir / \"sample_tile_results\",\n",
 " mode=\"tile\",\n",
-" on_gpu=ON_GPU,\n",
+" device=device,\n",
 " crash_on_exception=True,\n",
 ")"
 ]
@@ -418,7 +417,7 @@
 "\n",
 "- `mode`: the mode of inference which can be set to either `'tile'` or `'wsi'`, for plain histology images or structured whole slides images, respectively.\n",
 "\n",
-"- `on_gpu`: can be either `True` or `False` to dictate running the computations on GPU or CPU.\n",
+"- `device`: specify an appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device), e.g., \"cuda\", \"cuda:0\", \"mps\", or \"cpu\".\n",
 "\n",
 "- `crash_on_exception`: If set to `True`, the running loop will crash if there is an error during processing a WSI. Otherwise, the loop will move on to the next image (wsi) for processing. We suggest that you first make sure that prediction is working as expected by testing it on a couple of inputs and then set this flag to `False` to process large cohorts of inputs.\n",
 "\n",
@@ -430,7 +429,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 8,
+"execution_count": null,
 "metadata": {
 "colab": {
 "base_uri": "https://localhost:8080/",
@@ -546,7 +545,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 10,
+"execution_count": null,
 "metadata": {
 "colab": {
 "base_uri": "https://localhost:8080/"
@@ -595,7 +594,7 @@
 " masks=None,\n",
 " save_dir=global_save_dir / \"sample_wsi_results/\",\n",
 " mode=\"wsi\",\n",
-" on_gpu=ON_GPU,\n",
+" device=device,\n",
 " crash_on_exception=True,\n",
 ")"
 ]
@@ -612,13 +611,13 @@
 "1. Setting `mode='wsi'` in the `predict` function indicates that we are predicting region segmentations for inputs in the form of WSIs.\n",
 "1. `masks=None` in the `predict` function: the `masks` argument is a list of paths to the desired image masks. Patches from `imgs` are only processed if they are within a masked area of their corresponding `masks`. If not provided (`masks=None`), then either a tissue mask is automatically generated for whole-slide images or the entire image is processed as a collection of image tiles.\n",
 "\n",
-"The above cell might take a while to process, especially if you have set `ON_GPU=False`. The processing time depends on the size of the input WSI and the selected resolution. Here, we have not specified any values and we use the assumed input resolution (20x) of HoVer-Net+.\n",
+"The above cell might take a while to process, especially if you have set `device=\"cpu\"`. The processing time depends on the size of the input WSI and the selected resolution. Here, we have not specified any values and we use the assumed input resolution (20x) of HoVer-Net+.\n",
 "\n"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 12,
+"execution_count": null,
 "metadata": {
 "colab": {
 "base_uri": "https://localhost:8080/",
