
Commit 8d94094

Merge branch 'develop' into dev-define-engines-abc
2 parents: f6ba41f + 6b214fe

25 files changed: +5,901 additions, -5,348 deletions

.github/workflows/python-package.yml

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ jobs:
         sudo apt update
         sudo apt-get install -y libopenslide-dev openslide-tools libopenjp2-7 libopenjp2-tools
         python -m pip install --upgrade pip
-        python -m pip install ruff==0.7.4 pytest pytest-cov pytest-runner
+        python -m pip install ruff==0.8.1 pytest pytest-cov pytest-runner
         pip install -r requirements/requirements.txt
     - name: Cache tiatoolbox static assets
       uses: actions/cache@v3

.pre-commit-config.yaml

Lines changed: 2 additions & 2 deletions
@@ -23,7 +23,7 @@ repos:
       - mdformat-black
       - mdformat-myst
   - repo: https://github.com/executablebooks/mdformat
-    rev: 0.7.18
+    rev: 0.7.19
     hooks:
       - id: mdformat
         # Optionally add plugins
@@ -60,7 +60,7 @@ repos:
       - id: rst-inline-touching-normal # Detect mistake of inline code touching normal text in rst.
   - repo: https://github.com/astral-sh/ruff-pre-commit
     # Ruff version.
-    rev: v0.7.4
+    rev: v0.8.1
     hooks:
       - id: ruff
         args: [--fix, --exit-non-zero-on-fix]

HISTORY.md

Lines changed: 9 additions & 9 deletions
@@ -64,7 +64,7 @@
 
 ### Major Updates and Feature Improvements
 
-- Adds Python 3.11 support \[experimental\] #500
+- Adds Python 3.11 support [experimental] #500
   - Python 3.11 is not fully supported by `pytorch` https://github.com/pytorch/pytorch/issues/86566 and `openslide` https://github.com/openslide/openslide-python/pull/188
 - Removes Python 3.7 support
   - This allows upgrading all the dependencies which were dependent on an older version of Python.
@@ -181,7 +181,7 @@ None
 - Adds DICE metric
 - Adds [SCCNN](https://doi.org/10.1109/tmi.2016.2525803) architecture. \[[read the docs](https://tia-toolbox.readthedocs.io/en/develop/_autosummary/tiatoolbox.models.architecture.sccnn.SCCNN.html)\]
 - Adds [MapDe](https://arxiv.org/abs/1806.06970) architecture. \[[read the docs](https://tia-toolbox.readthedocs.io/en/develop/_autosummary/tiatoolbox.models.architecture.mapde.MapDe.html)\]
-- Adds support for reading MPP metadata from NGFF v0.4
+- Adds support for reading MPP metadata from NGFF v0.4
 - Adds enhancements to tiatoolbox.annotation.storage that are useful when using an AnnotationStore for visualization purposes.
 
 ### Changes to API
@@ -196,7 +196,7 @@ None
 - Fixes nucleus_segmentor_engine for boundary artefacts
 - Fixes the colorbar cropping in tests
 - Adds citation in README.md and CITATION.cff to Nature Communications Medicine paper
-- Fixes a bug #452 raised by @rogertrullo where only the numerator of the TIFF resolution tags was being read.
+- Fixes a bug #452 raised by @rogertrullo where only the numerator of the TIFF resolution tags was being read.
 - Fixes HoVer-Net+ post-processing to be inline with original work.
 - Fixes a bug where an exception would be raised if the OME XML is missing objective power.
 
@@ -337,7 +337,7 @@ None
 ### Major Updates and Feature Improvements
 
 - Adds nucleus instance segmentation base class
-- Adds [HoVerNet](https://www.sciencedirect.com/science/article/abs/pii/S1361841519301045) architecture
+- Adds [HoVerNet](https://www.sciencedirect.com/science/article/abs/pii/S1361841519301045) architecture
 - Adds multi-task segmentor [HoVerNet+](https://arxiv.org/abs/2108.13904) model
 - Adds <a href="https://www.thelancet.com/journals/landig/article/PIIS2589-7500(21)00180-1/fulltext">IDaRS</a> pipeline
 - Adds [SlideGraph](https://arxiv.org/abs/2110.06042) pipeline
@@ -358,7 +358,7 @@ None
 
 ### Bug Fixes and Other Changes
 
-- Fixes Fix `filter_coordinates` read wrong resolutions for patch extraction
+- Fixes `filter_coordinates` read wrong resolutions for patch extraction
 - For `PatchPredictor`
   - `ioconfig` will supersede everything
   - if `ioconfig` is not provided
@@ -410,7 +410,7 @@ None
 - Adds dependencies for tiffile, imagecodecs, zarr.
 - Adds more stringent pre-commit checks
 - Moved local test files into `tiatoolbox/data`.
-- Fixed `Manifest.ini` and added `tiatoolbox/data`. This means that this directory will be downloaded with the package.
+- Fixed `Manifest.ini` and added `tiatoolbox/data`. This means that this directory will be downloaded with the package.
 - Using `pkg_resources` to properly load bundled resources (e.g. `target_image.png`) in `tiatoolbox.data`.
 - Removed duplicate code in `conftest.py` for downloading remote files. This is now in `tiatoolbox.data._fetch_remote_file`.
 - Fixes errors raised by new flake8 rules.
@@ -513,9 +513,9 @@ ______________________________________________________________________
 - `read_bounds` takes a tuple (left, top, right, bottom) of coordinates in baseline (level 0) reference frame and returns a region bounded by those.
 - `read_rect` takes one coordinate in baseline reference frame and an output size in pixels.
 - Adds `VirtualWSIReader` as a subclass of WSIReader which can be used to read visual fields (tiles).
-  - `VirtualWSIReader` accepts ndarray or image path as input.
-- Adds MPP fall back to standard TIFF resolution tags with warning.
-  - If OpenSlide cannot determine microns per pixel (`mpp`) from the metadata, checks the TIFF resolution units (TIFF tags: `ResolutionUnit`, `XResolution` and `YResolution`) to calculate MPP. Additionally, add function to estimate missing objective power if MPP is known of derived from TIFF resolution tags.
+  - `VirtualWSIReader` accepts ndarray or image path as input.
+- Adds MPP fall back to standard TIFF resolution tags with warning.
+  - If OpenSlide cannot determine microns per pixel (`mpp`) from the metadata, checks the TIFF resolution units (TIFF tags: `ResolutionUnit`, `XResolution` and `YResolution`) to calculate MPP. Additionally, add function to estimate missing objective power if MPP is known of derived from TIFF resolution tags.
 - Estimates missing objective power from MPP with warning.
 - Adds example notebooks for stain normalisation and WSI reader.
 - Adds caching to slide info property. This is done by checking if a private `self._m_info` exists and returning it if so, otherwise `self._info` is called to create the info for the first time (or to force regenerating) and the result is assigned to `self._m_info`. This could in future be made much simpler with the `functools.cached_property` decorator in Python 3.8+.

benchmarks/annotation_store.ipynb

Lines changed: 2 additions & 2 deletions
@@ -195,7 +195,7 @@
 "from typing import TYPE_CHECKING, Any\n",
 "\n",
 "import numpy as np\n",
-"from IPython.display import display\n",
+"from IPython.display import display_svg\n",
 "from matplotlib import patheffects\n",
 "from matplotlib import pyplot as plt\n",
 "from shapely import affinity\n",
@@ -444,7 +444,7 @@
 ],
 "source": [
 "for n in range(4):\n",
-"    display(cell_polygon(xy=(0, 0), n_points=20, repeat_first=False, seed=n))"
+"    display_svg(cell_polygon(xy=(0, 0), n_points=20, repeat_first=False, seed=n))"
 ]
 },
 {
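For context on the `display` → `display_svg` switch above: `display_svg` renders objects that expose an SVG representation (Shapely geometries do, via `_repr_svg_`) directly in a Jupyter notebook. A minimal sketch, assuming a plain Shapely polygon in place of the benchmark notebook's `cell_polygon` helper:

```python
# Minimal sketch (illustrative only): render a Shapely geometry as SVG in Jupyter.
# A simple square stands in for the benchmark notebook's cell_polygon() helper.
from IPython.display import display_svg
from shapely.geometry import Polygon

square = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])

# display_svg uses the object's _repr_svg_ hook, so the geometry is shown as a
# vector graphic rather than as its plain-text repr.
display_svg(square)
```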

docs/images/feature_extraction.png

3.45 MB

examples/05-patch-prediction.ipynb

Lines changed: 67 additions & 74 deletions
Large diffs are not rendered by default.

examples/06-semantic-segmentation.ipynb

Lines changed: 33 additions & 17 deletions
Large diffs are not rendered by default.

examples/07-advanced-modeling.ipynb

Lines changed: 88 additions & 44 deletions
Large diffs are not rendered by default.

examples/08-nucleus-instance-segmentation.ipynb

Lines changed: 13 additions & 13 deletions
@@ -158,7 +158,7 @@
 "source": [
 "### GPU or CPU runtime\n",
 "\n",
-"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify if you are using GPU or CPU hardware acceleration. In Colab, you need to make sure that the runtime type is set to GPU in the *\"Runtime→Change runtime type→Hardware accelerator\"*. If you are *not* using GPU, consider changing the `ON_GPU` flag to `Flase` value, otherwise, some errors will be raised when running the following cells.\n",
+"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) e.g., \"cuda\" or \"cpu\" whether you are using GPU or CPU. In Colab, you need to make sure that the runtime type is set to GPU in the *\"Runtime→Change runtime type→Hardware accelerator\"*. If you are *not* using GPU, consider changing the `device` flag to `cpu` value, otherwise, some errors will be raised when running the following cells.\n",
 "\n"
 ]
 },
@@ -173,8 +173,7 @@
 },
 "outputs": [],
 "source": [
-"# Should be changed to False if no cuda-enabled GPU is available.\n",
-"ON_GPU = True # Default is True."
+"device = \"cuda\" # Choose appropriate device"
 ]
 },
 {
@@ -356,7 +355,7 @@
 "    [img_file_name],\n",
 "    save_dir=\"sample_tile_results/\",\n",
 "    mode=\"tile\",\n",
-"    on_gpu=ON_GPU,\n",
+"    device=device,\n",
 "    crash_on_exception=True,\n",
 ")"
 ]
@@ -386,7 +385,7 @@
 "\n",
 "- `mode`: the mode of inference which can be set to either `'tile'` or `'wsi'` for plain histology images or structured whole slides images, respectively.\n",
 "\n",
-"- `on_gpu`: can be `True` or `False` to dictate running the computations on GPU or CPU.\n",
+"- `device`: specify appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) e.g., \"cuda\", \"cuda:0\", \"mps\", \"cpu\" etc.\n",
 "\n",
 "- `crash_on_exception`: If set to `True`, the running loop will crash if there is an error during processing a WSI. Otherwise, the loop will move on to the next image (wsi) for processing. We suggest that you first make sure that the prediction is working as expected by testing it on a couple of inputs and then set this flag to `False` to process large cohorts of inputs.\n",
 "\n",
@@ -5615,13 +5614,13 @@
 ")\n",
 "\n",
 "# WSI prediction\n",
-"# if ON_GPU=False, this part will take more than a couple of hours to process.\n",
+"# if device=\"cpu\", this part will take more than a couple of hours to process.\n",
 "wsi_output = inst_segmentor.predict(\n",
 "    [wsi_file_name],\n",
 "    masks=None,\n",
 "    save_dir=\"sample_wsi_results/\",\n",
 "    mode=\"wsi\",\n",
-"    on_gpu=ON_GPU,\n",
+"    device=device,\n",
 "    crash_on_exception=True,\n",
 ")"
 ]
@@ -5638,7 +5637,7 @@
 "1. Setting `mode='wsi'` in the arguments to `predict` tells the program that the input are in WSI format.\n",
 "1. `masks=None`: the `masks` argument to the `predict` function is handled in the same way as the imgs argument. It is a list of paths to the desired image masks. Patches from `imgs` are only processed if they are within a masked area of their corresponding `masks`. If not provided (`masks=None`), then a tissue mask is generated for whole-slide images or, for image tiles, the entire image is processed.\n",
 "\n",
-"The above code cell might take a while to process, especially if `ON_GPU=False`. The processing time mostly depends on the size of the input WSI.\n",
+"The above code cell might take a while to process, especially if `device=\"cpu\"`. The processing time mostly depends on the size of the input WSI.\n",
 "The output, `wsi_output`, of `predict` contains a list of paths to the input WSIs and the corresponding output results saved on disk. The results for nucleus instance segmentation in `'wsi'` mode are stored in a Python dictionary, in the same way as was done for `'tile'` mode.\n",
 "We use `joblib` to load the outputs for this sample WSI and then inspect the results dictionary.\n",
 "\n"
@@ -5788,11 +5787,12 @@
 ")\n",
 "\n",
 "color_dict = {\n",
-"    0: (\"neoplastic epithelial\", (255, 0, 0)),\n",
-"    1: (\"Inflammatory\", (255, 255, 0)),\n",
-"    2: (\"Connective\", (0, 255, 0)),\n",
-"    3: (\"Dead\", (0, 0, 0)),\n",
-"    4: (\"non-neoplastic epithelial\", (0, 0, 255)),\n",
+"    0: (\"background\", (255, 165, 0)),\n",
+"    1: (\"neoplastic epithelial\", (255, 0, 0)),\n",
+"    2: (\"Inflammatory\", (255, 255, 0)),\n",
+"    3: (\"Connective\", (0, 255, 0)),\n",
+"    4: (\"Dead\", (0, 0, 0)),\n",
+"    5: (\"non-neoplastic epithelial\", (0, 0, 255)),\n",
 "}\n",
 "\n",
 "# Create the overlay image\n",

examples/09-multi-task-segmentation.ipynb

Lines changed: 14 additions & 15 deletions
@@ -105,7 +105,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 2,
+"execution_count": null,
 "metadata": {
 "id": "UEIfjUTaJLPj",
 "outputId": "e4f383f2-306d-4afd-cd82-fec14a184941",
@@ -169,13 +169,13 @@
 "source": [
 "### GPU or CPU runtime\n",
 "\n",
-"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify whether you are using GPU or CPU hardware acceleration. In Colab, make sure that the runtime type is set to GPU, using the menu *Runtime→Change runtime type→Hardware accelerator*. If you are *not* using GPU, change the `ON_GPU` flag to `False`.\n",
+"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) e.g., \"cuda\" or \"cpu\" whether you are using GPU or CPU. In Colab, you need to make sure that the runtime type is set to GPU in the *\"Runtime→Change runtime type→Hardware accelerator\"*. If you are *not* using GPU, consider changing the `device` flag to `cpu` value, otherwise, some errors will be raised when running the following cells.\n",
 "\n"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 3,
+"execution_count": null,
 "metadata": {
 "id": "haTA_oQIY1Vy",
 "tags": [
@@ -184,8 +184,7 @@
 },
 "outputs": [],
 "source": [
-"# Should be changed to False if no cuda-enabled GPU is available.\n",
-"ON_GPU = True # Default is True."
+"device = \"cuda\" # Choose appropriate device"
 ]
 },
 {
@@ -205,7 +204,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": null,
 "metadata": {
 "colab": {
 "base_uri": "https://localhost:8080/"
@@ -260,7 +259,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 6,
+"execution_count": null,
 "metadata": {
 "colab": {
 "base_uri": "https://localhost:8080/"
@@ -335,7 +334,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 7,
+"execution_count": null,
 "metadata": {
 "colab": {
 "base_uri": "https://localhost:8080/"
@@ -390,7 +389,7 @@
 "    [img_file_name],\n",
 "    save_dir=global_save_dir / \"sample_tile_results\",\n",
 "    mode=\"tile\",\n",
-"    on_gpu=ON_GPU,\n",
+"    device=device,\n",
 "    crash_on_exception=True,\n",
 ")"
 ]
@@ -418,7 +417,7 @@
 "\n",
 "- `mode`: the mode of inference which can be set to either `'tile'` or `'wsi'`, for plain histology images or structured whole slides images, respectively.\n",
 "\n",
-"- `on_gpu`: can be either `True` or `False` to dictate running the computations on GPU or CPU.\n",
+"- `device`: specify appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) e.g., \"cuda\", \"cuda:0\", \"mps\", \"cpu\" etc.\n",
 "\n",
 "- `crash_on_exception`: If set to `True`, the running loop will crash if there is an error during processing a WSI. Otherwise, the loop will move on to the next image (wsi) for processing. We suggest that you first make sure that prediction is working as expected by testing it on a couple of inputs and then set this flag to `False` to process large cohorts of inputs.\n",
 "\n",
@@ -430,7 +429,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 8,
+"execution_count": null,
 "metadata": {
 "colab": {
 "base_uri": "https://localhost:8080/",
@@ -546,7 +545,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 10,
+"execution_count": null,
 "metadata": {
 "colab": {
 "base_uri": "https://localhost:8080/"
@@ -595,7 +594,7 @@
 "    masks=None,\n",
 "    save_dir=global_save_dir / \"sample_wsi_results/\",\n",
 "    mode=\"wsi\",\n",
-"    on_gpu=ON_GPU,\n",
+"    device=device,\n",
 "    crash_on_exception=True,\n",
 ")"
 ]
@@ -612,13 +611,13 @@
 "1. Setting `mode='wsi'` in the `predict` function indicates that we are predicting region segmentations for inputs in the form of WSIs.\n",
 "1. `masks=None` in the `predict` function: the `masks` argument is a list of paths to the desired image masks. Patches from `imgs` are only processed if they are within a masked area of their corresponding `masks`. If not provided (`masks=None`), then either a tissue mask is automatically generated for whole-slide images or the entire image is processed as a collection of image tiles.\n",
 "\n",
-"The above cell might take a while to process, especially if you have set `ON_GPU=False`. The processing time depends on the size of the input WSI and the selected resolution. Here, we have not specified any values and we use the assumed input resolution (20x) of HoVer-Net+.\n",
+"The above cell might take a while to process, especially if you have set `device=\"cpu\"`. The processing time depends on the size of the input WSI and the selected resolution. Here, we have not specified any values and we use the assumed input resolution (20x) of HoVer-Net+.\n",
 "\n"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 12,
+"execution_count": null,
 "metadata": {
 "colab": {
 "base_uri": "https://localhost:8080/",
