|
158 | 158 | "source": [ |
159 | 159 | "### GPU or CPU runtime\n", |
160 | 160 | "\n", |
161 | | - "Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify if you are using GPU or CPU hardware acceleration. In Colab, you need to make sure that the runtime type is set to GPU in the *\"Runtime→Change runtime type→Hardware accelerator\"*. If you are *not* using GPU, consider changing the `ON_GPU` flag to `Flase` value, otherwise, some errors will be raised when running the following cells.\n", |
| 161 | + "Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your own system or on Colab, you need to specify an appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device), e.g. \"cuda\" or \"cpu\", depending on whether you are using GPU or CPU hardware acceleration. In Colab, make sure that the runtime type is set to GPU under *\"Runtime→Change runtime type→Hardware accelerator\"*. If you are *not* using a GPU, set the `device` variable to \"cpu\"; otherwise, errors will be raised when running the following cells.\n", |
162 | 162 | "\n" |
163 | 163 | ] |
164 | 164 | }, |
|
173 | 173 | }, |
174 | 174 | "outputs": [], |
175 | 175 | "source": [ |
176 | | - "# Should be changed to False if no cuda-enabled GPU is available.\n", |
177 | | - "ON_GPU = True # Default is True." |
| 176 | + "device = \"cuda\"  # Change to \"cpu\" if no CUDA-enabled GPU is available." |
178 | 177 | ] |
179 | 178 | }, |
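Rather than hard-coding the device string, it can also be chosen at runtime. A minimal sketch, assuming PyTorch is installed (the import guard is only for illustration and is not part of the notebook):

```python
# Pick a device string automatically: prefer CUDA when a GPU is available.
try:
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:  # PyTorch not installed; fall back to CPU
    device = "cpu"

print(device)
```

This keeps the rest of the notebook unchanged: every cell that takes a `device` argument simply receives whichever string was selected here.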
180 | 179 | { |
|
356 | 355 | " [img_file_name],\n", |
357 | 356 | " save_dir=\"sample_tile_results/\",\n", |
358 | 357 | " mode=\"tile\",\n", |
359 | | - " on_gpu=ON_GPU,\n", |
| 358 | + " device=device,\n", |
360 | 359 | " crash_on_exception=True,\n", |
361 | 360 | ")" |
362 | 361 | ] |
|
386 | 385 | "\n", |
387 | 386 | "- `mode`: the mode of inference, which can be set to either `'tile'` or `'wsi'` for plain histology images or structured whole slide images, respectively.\n", |
388 | 387 | "\n", |
389 | | - "- `on_gpu`: can be `True` or `False` to dictate running the computations on GPU or CPU.\n", |
| 388 | + "- `device`: specifies the [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) on which to run the computations, e.g. \"cuda\", \"cuda:0\", \"mps\", or \"cpu\".\n", |
390 | 389 | "\n", |
391 | 390 | "- `crash_on_exception`: If set to `True`, the running loop will crash if there is an error during processing a WSI. Otherwise, the loop will move on to the next image (wsi) for processing. We suggest that you first make sure that the prediction is working as expected by testing it on a couple of inputs and then set this flag to `False` to process large cohorts of inputs.\n", |
392 | 391 | "\n", |
|
5615 | 5614 | ")\n", |
5616 | 5615 | "\n", |
5617 | 5616 | "# WSI prediction\n", |
5618 | | - "# if ON_GPU=False, this part will take more than a couple of hours to process.\n", |
| 5617 | + "# if device=\"cpu\", this part will take more than a couple of hours to process.\n", |
5619 | 5618 | "wsi_output = inst_segmentor.predict(\n", |
5620 | 5619 | " [wsi_file_name],\n", |
5621 | 5620 | " masks=None,\n", |
5622 | 5621 | " save_dir=\"sample_wsi_results/\",\n", |
5623 | 5622 | " mode=\"wsi\",\n", |
5624 | | - " on_gpu=ON_GPU,\n", |
| 5623 | + " device=device,\n", |
5625 | 5624 | " crash_on_exception=True,\n", |
5626 | 5625 | ")" |
5627 | 5626 | ] |
|
5638 | 5637 | "1. Setting `mode='wsi'` in the arguments to `predict` tells the program that the inputs are in WSI format.\n", |
5639 | 5638 | "1. `masks=None`: the `masks` argument to the `predict` function is handled in the same way as the `imgs` argument. It is a list of paths to the desired image masks. Patches from `imgs` are only processed if they are within a masked area of their corresponding `masks`. If not provided (`masks=None`), then a tissue mask is generated for whole-slide images or, for image tiles, the entire image is processed.\n", |
5640 | 5639 | "\n", |
5641 | | - "The above code cell might take a while to process, especially if `ON_GPU=False`. The processing time mostly depends on the size of the input WSI.\n", |
| 5640 | + "The above code cell might take a while to process, especially if `device=\"cpu\"`. The processing time mostly depends on the size of the input WSI.\n", |
5642 | 5641 | "The output, `wsi_output`, of `predict` contains a list of paths to the input WSIs and the corresponding output results saved on disk. The results for nucleus instance segmentation in `'wsi'` mode are stored in a Python dictionary, in the same way as was done for `'tile'` mode.\n", |
5643 | 5642 | "We use `joblib` to load the outputs for this sample WSI and then inspect the results dictionary.\n", |
5644 | 5643 | "\n" |
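The joblib round trip described above can be sketched as follows. The `results` dictionary here is a hypothetical stand-in for the per-nucleus output; its keys are illustrative, not the exact TIAToolbox schema:

```python
import os
import tempfile

import joblib  # assuming joblib is installed (it is a TIAToolbox dependency)

# Hypothetical nucleus-instance results; keys and values are illustrative only.
results = {
    0: {"box": [10, 12, 50, 55], "centroid": [30.0, 33.5], "type": 1, "prob": 0.98},
}

# Dump the dictionary to disk, then reload it, mirroring how saved
# predictions are inspected after `predict` returns.
out_path = os.path.join(tempfile.mkdtemp(), "0.dat")
joblib.dump(results, out_path)
wsi_pred = joblib.load(out_path)

print(sorted(wsi_pred[0].keys()))
```

In the actual notebook the path passed to `joblib.load` would come from `wsi_output`, which pairs each input WSI with the location of its saved results.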
|
5788 | 5787 | ")\n", |
5789 | 5788 | "\n", |
5790 | 5789 | "color_dict = {\n", |
5791 | | - " 0: (\"neoplastic epithelial\", (255, 0, 0)),\n", |
5792 | | - " 1: (\"Inflammatory\", (255, 255, 0)),\n", |
5793 | | - " 2: (\"Connective\", (0, 255, 0)),\n", |
5794 | | - " 3: (\"Dead\", (0, 0, 0)),\n", |
5795 | | - " 4: (\"non-neoplastic epithelial\", (0, 0, 255)),\n", |
| 5790 | + " 0: (\"background\", (255, 165, 0)),\n", |
| 5791 | + " 1: (\"neoplastic epithelial\", (255, 0, 0)),\n", |
| 5792 | + " 2: (\"Inflammatory\", (255, 255, 0)),\n", |
| 5793 | + " 3: (\"Connective\", (0, 255, 0)),\n", |
| 5794 | + " 4: (\"Dead\", (0, 0, 0)),\n", |
| 5795 | + " 5: (\"non-neoplastic epithelial\", (0, 0, 255)),\n", |
5796 | 5796 | "}\n", |
5797 | 5797 | "\n", |
5798 | 5798 | "# Create the overlay image\n", |
|