
Commit 9f7a4ad

Minor updates to ensure compatibility with HSDK v3.5.0 (#549)
* Minor updates to ensure compatibility with HSDK v3.5.0:
  - The minimum Python version is now 3.10.
  - Use of NumPy `can_cast` with a Python type is removed.
  - The DICOM series to volume operator is updated to use pydicom `apply_rescale` directly.
  - Tested the Jupyter notebooks (using a locally built SDK for the `monai-deploy package` command).
* Fix a Flake8 complaint raised by the GitHub pull request check.
* Keep it simple and use Python 3.10.
* Set Read the Docs to use Python 3.10.
* Keep Python 3.9 as the minimum for the doc build to succeed for now; the doc build requirements need to be fully examined and updated in another PR.
* Revert to the earlier version.
* Apply a Copilot suggestion, with a correction.
* Fix a Flake8 complaint in GenAI-created code.
* Push the latest test run of the notebook; no code change.
* Address the concern that the rescaled pixel array from DICOM is of type float64. In the last version, the rescale slope and intercept values were cast, where possible, to the smallest NumPy type allowed by `can_cast`. Using pydicom `apply_rescale` loses this, as it returns float64 when rescale slope and intercept are applied. The code is therefore enhanced to test for equivalence before "unsafely" casting the data. This will NOT reduce the memory usage of this operator, since a new array is created by casting from float64 and the checks add a small overhead; however, there are cases where downstream custom operators may save memory footprint.
* Address memory usage concerns by processing each slice and capping its ndarray at the minimum size.
* Fix a typo that was missed by codespell but flagged by Copilot.
* Change a logging level to debug.

Signed-off-by: M Q <[email protected]>
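The "test for equivalence before unsafely casting" idea from the commit message can be sketched in isolation. This is a minimal standalone NumPy sketch, not the operator's actual code; the helper name `downcast_if_equivalent` is invented here. A float64 rescaled array is narrowed only when a round trip through the smaller dtype preserves every value exactly.

```python
import numpy as np

def downcast_if_equivalent(arr: np.ndarray) -> np.ndarray:
    """Return arr in the smallest of uint16/int16/float32 that preserves
    every value exactly; otherwise return it unchanged (e.g. float64)."""
    if arr.dtype in (np.uint8, np.uint16):
        return arr  # already compact, nothing to do
    # Unsigned candidate: values must be non-negative and survive a round trip.
    if np.all(arr >= 0) and np.array_equal(arr, arr.astype(np.uint16)):
        return arr.astype(np.uint16)
    # Signed integer candidate.
    if np.array_equal(arr, arr.astype(np.int16)):
        return arr.astype(np.int16)
    # Single-precision float candidate.
    if np.array_equal(arr, arr.astype(np.float32)):
        return arr.astype(np.float32)
    return arr  # no lossless narrowing found; keep float64

# A float64 array whose values are all exact small non-negative integers
# (the common CT case after rescale) narrows to uint16.
rescaled = np.array([[0.0, 100.0], [200.0, 300.0]], dtype=np.float64)
print(downcast_if_equivalent(rescaled).dtype)  # uint16
```

Note the asymmetry the commit message alludes to: `np.array_equal` compares the round-tripped array against the float64 original, so truncation, overflow wrap-around, or precision loss all make the comparison fail and the array stays wide.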
1 parent dec9305 commit 9f7a4ad

File tree: 12 files changed, +1321 -1100 lines

README.md

Lines changed: 1 addition & 1 deletion

@@ -41,7 +41,7 @@ pip install monai-deploy-app-sdk
 - This SDK depends on [NVIDIA Holoscan SDK](https://pypi.org/project/holoscan/) for its core implementation as well as its CLI, hence inherits its prerequisites, e.g. Ubuntu 22.04 with glibc 2.35 on X86-64 and NVIDIA dGPU drivers version 535 or above.
 - [CUDA 12.2](https://developer.nvidia.com/cuda-12-2-0-download-archive) or above is required along with a supported NVIDIA GPU with at least 8GB of video RAM.
 - If inference is not used in an example application and a GPU is not installed, at least [CUDA 12 runtime](https://pypi.org/project/nvidia-cuda-runtime-cu12/) is required, as this is one of the requirements of Holoscan SDK. In addition, the `LD_LIBRARY_PATH` must be set to include the installed shared library, e.g. in a Python 3.10 env, ```export LD_LIBRARY_PATH=`pwd`/.venv/lib/python3.10/site-packages/nvidia/cuda_runtime/lib:$LD_LIBRARY_PATH```
-- Python: 3.9 to 3.12
+- Python: 3.10 to 3.13
 
 ## Getting Started

docs/source/getting_started/tutorials/mednist_app.md

Lines changed: 2 additions & 2 deletions

@@ -5,9 +5,9 @@ This tutorial demos the process of packaging up a trained model using MONAI Depl
 ## Setup
 
 ```bash
-# Create a virtual environment with Python 3.9.
+# Create a virtual environment with Python 3.10.
 # Skip if you are already in a virtual environment.
-conda create -n mednist python=3.9 pytorch jupyterlab cudatoolkit=12.2 -c pytorch -c conda-forge
+conda create -n mednist python=3.10 pytorch jupyterlab cudatoolkit=12.2 -c pytorch -c conda-forge
 conda activate mednist
 
 # Launch JupyterLab if you want to work on Jupyter Notebook

docs/source/getting_started/tutorials/monai_bundle_app.md

Lines changed: 2 additions & 2 deletions

@@ -5,9 +5,9 @@ This tutorial shows how to create an organ segmentation application for a PyTorc
 ## Setup
 
 ```bash
-# Create a virtual environment with Python 3.9.
+# Create a virtual environment with Python 3.10.
 # Skip if you are already in a virtual environment.
-conda create -n monai python=3.9 pytorch torchvision jupyterlab cudatoolkit=12.2 -c pytorch -c conda-forge
+conda create -n monai python=3.10 pytorch torchvision jupyterlab cudatoolkit=12.2 -c pytorch -c conda-forge
 conda activate monai
 
 # Launch JupyterLab if you want to work on Jupyter Notebook

docs/source/getting_started/tutorials/multi_model_app.md

Lines changed: 2 additions & 2 deletions

@@ -7,9 +7,9 @@ The models used in this example are trained with MONAI, and are packaged in the
 ## Setup
 
 ```bash
-# Create a virtual environment with Python 3.9.
+# Create a virtual environment with Python 3.10.
 # Skip if you are already in a virtual environment.
-conda create -n monai python=3.9 pytorch torchvision jupyterlab cudatoolkit=12.2 -c pytorch -c conda-forge
+conda create -n monai python=3.10 pytorch torchvision jupyterlab cudatoolkit=12.2 -c pytorch -c conda-forge
 conda activate monai
 
 # Launch JupyterLab if you want to work on Jupyter Notebook

docs/source/getting_started/tutorials/segmentation_app.md

Lines changed: 2 additions & 2 deletions

@@ -7,9 +7,9 @@ Please note that the following steps are for demonstration purpose. The code pul
 ## Setup
 
 ```bash
-# Create a virtual environment with Python 3.9.
+# Create a virtual environment with Python 3.10.
 # Skip if you are already in a virtual environment.
-conda create -n monai python=3.9 pytorch torchvision jupyterlab cudatoolkit=12.2 -c pytorch -c conda-forge
+conda create -n monai python=3.10 pytorch torchvision jupyterlab cudatoolkit=12.2 -c pytorch -c conda-forge
 conda activate monai
 
 # Launch JupyterLab if you want to work on Jupyter Notebook

docs/source/getting_started/tutorials/segmentation_clara-viz_app.md

Lines changed: 2 additions & 2 deletions

@@ -5,9 +5,9 @@ This tutorial shows how to create an organ segmentation application for a PyTorc
 ## Setup
 
 ```bash
-# Create a virtual environment with Python 3.9.
+# Create a virtual environment with Python 3.10.
 # Skip if you are already in a virtual environment.
-conda create -n monai python=3.9 pytorch torchvision jupyterlab cudatoolkit=12.2 -c pytorch -c conda-forge
+conda create -n monai python=3.10 pytorch torchvision jupyterlab cudatoolkit=12.2 -c pytorch -c conda-forge
 conda activate monai
 
 # Launch JupyterLab if you want to work on Jupyter Notebook

docs/source/getting_started/tutorials/simple_app.md

Lines changed: 2 additions & 2 deletions

@@ -5,9 +5,9 @@ This tutorial shows how a simple image processing application can be created wit
 ## Setup
 
 ```bash
-# Create a virtual environment with Python 3.9.
+# Create a virtual environment with Python 3.10.
 # Skip if you are already in a virtual environment.
-conda create -n monai python=3.9 pytorch torchvision jupyterlab cudatoolkit=12.2 -c pytorch -c conda-forge
+conda create -n monai python=3.10 pytorch torchvision jupyterlab cudatoolkit=12.2 -c pytorch -c conda-forge
 conda activate monai
 
 # Launch JupyterLab if you want to work on Jupyter Notebook

monai/deploy/operators/dicom_data_loader_operator.py

Lines changed: 2 additions & 2 deletions

@@ -437,10 +437,10 @@ def test():
         print(f"      'SeriesDescription': {ds.SeriesDescription if ds.SeriesDescription else ''}")
         print(
             "      'IssuerOfPatientID':"
-            f" {ds.get('IssuerOfPatientID', '').repval if ds.get('IssuerOfPatientID', '') else '' }"
+            f" {ds.get('IssuerOfPatientID', '').repval if ds.get('IssuerOfPatientID', '') else ''}"
         )
         try:
-            print(f"      'IssuerOfPatientID': {ds.IssuerOfPatientID if ds.IssuerOfPatientID else '' }")
+            print(f"      'IssuerOfPatientID': {ds.IssuerOfPatientID if ds.IssuerOfPatientID else ''}")
         except AttributeError:
             print(
                 "      If the IssuerOfPatientID does not exist, ds.IssuerOfPatientID would throw AttributeError."

monai/deploy/operators/dicom_series_to_volume_operator.py

Lines changed: 73 additions & 76 deletions

@@ -16,6 +16,11 @@
 
 import numpy as np
 
+from monai.deploy.utils.importutil import optional_import
+
+apply_presentation_lut, _ = optional_import("pydicom.pixels", name="apply_presentation_lut")
+apply_rescale, _ = optional_import("pydicom.pixels", name="apply_rescale")
+
 from monai.deploy.core import ConditionType, Fragment, Operator, OperatorSpec
 from monai.deploy.core.domain.dicom_series_selection import StudySelectedSeries
 from monai.deploy.core.domain.image import Image
@@ -105,91 +110,83 @@ def generate_voxel_data(self, series):
         Returns:
             A 3D numpy tensor representing the volumetric data.
         """
+
+        def _get_rescaled_pixel_array(sop_instance):
+            # Use pydicom utility to apply a modality lookup table or rescale operator to the pixel array.
+            # The pydicom Dataset is required which can be obtained from the first slice's native SOP instance.
+            # If Modality LUT is present the return array is of np.uint8 or np.uint16, and if Rescale
+            # Intercept and Rescale Slope are present, np.float64.
+            # If the pixel array is already in the correct type, the return array is the same as the input array.
+
+            if not sop_instance:
+                return np.array([])
+
+            native_sop = None
+            try:
+                native_sop = sop_instance.get_native_sop_instance()
+                rescaled_pixel_data = apply_rescale(sop_instance.get_pixel_array(), native_sop)
+                # In our use cases, pixel data will be interpreted as if MONOCHROME2, hence need to
+                # apply the presentation lut.
+                rescaled_pixel_data = apply_presentation_lut(rescaled_pixel_data, native_sop)
+            except Exception as e:
+                logging.error(f"Failed to apply rescale to DICOM volume: {e}")
+                raise RuntimeError("Failed to apply rescale to DICOM volume.") from e
+
+            # The following tests are expecting the array is already of the NumPy type.
+            if rescaled_pixel_data.dtype == np.uint8 or rescaled_pixel_data.dtype == np.uint16:
+                logging.debug("Rescaled pixel array is already of type uint8 or uint16.")
+            # Check if casting to uint16 and back to float results in the same values.
+            elif np.all(rescaled_pixel_data > 0) and np.array_equal(
+                rescaled_pixel_data, rescaled_pixel_data.astype(np.uint16)
+            ):
+                logging.debug("Rescaled pixel array can be safely casted to uint16 with equivalence test.")
+                rescaled_pixel_data = rescaled_pixel_data.astype(dtype=np.uint16)
+            # Check if casting to int16 and back to float results in the same values.
+            elif np.array_equal(rescaled_pixel_data, rescaled_pixel_data.astype(np.int16)):
+                logging.debug("Rescaled pixel array can be safely casted to int16 with equivalence test.")
+                rescaled_pixel_data = rescaled_pixel_data.astype(dtype=np.int16)
+            # Check casting to float32 with equivalence test
+            elif np.array_equal(rescaled_pixel_data, rescaled_pixel_data.astype(np.float32)):
+                logging.debug("Rescaled pixel array can be safely casted to float32 with equivalence test.")
+                rescaled_pixel_data = rescaled_pixel_data.astype(np.float32)
+            else:
+                logging.debug("Rescaled pixel data remains as of type float64.")
+
+            return rescaled_pixel_data
+
         slices = series.get_sop_instances()
         # The sop_instance get_pixel_array() returns a 2D NumPy array with index order
         # of `HW`. The pixel array of all instances will be stacked along the first axis,
         # so the final 3D NumPy array will have index order of [DHW]. This is consistent
         # with the NumPy array returned from the ITK GetArrayViewFromImage on the image
         # loaded from the same DICOM series.
-        vol_data = np.stack([s.get_pixel_array() for s in slices], axis=0)
-        # The above get_pixel_array() already considers the PixelRepresentation attribute,
-        # 0 is unsigned int, 1 is signed int
-        if slices[0][0x0028, 0x0103].value == 0:
-            vol_data = vol_data.astype(np.uint16)
-
-        # For now we support monochrome image only, for which DICOM Photometric Interpretation
-        # (0028,0004) has defined terms, MONOCHROME1 and MONOCHROME2, with the former being:
-        # Pixel data represent a single monochrome image plane. The minimum sample value is
-        # intended to be displayed as white after any VOI gray scale transformations have been
-        # performed. See PS3.4. This value may be used only when Samples per Pixel (0028,0002)
-        # has a value of 1. May be used for pixel data in a Native (uncompressed) or Encapsulated
-        # (compressed) format; see Section 8.2 in PS3.5.
-        # and for the latter "The minimum sample value is intended to be displayed as black"
-        #
-        # In this function, pixel data will be interpreted as if MONOCHROME2, hence inverting
-        # MONOCHROME1 for the final voxel data.
-
-        photometric_interpretation = (
-            slices[0].get_native_sop_instance().get("PhotometricInterpretation", "").strip().upper()
-        )
-        presentation_lut_shape = slices[0].get_native_sop_instance().get("PresentationLUTShape", "").strip().upper()
-
-        if not photometric_interpretation:
-            logging.warning("Cannot get value of attribute Photometric Interpretation.")
-
-        if photometric_interpretation != "MONOCHROME2":
-            if photometric_interpretation == "MONOCHROME1" or presentation_lut_shape == "INVERSE":
-                logging.debug("Applying INVERSE transformation as required for MONOCHROME1 image.")
-                vol_data = np.amax(vol_data) - vol_data
-            else:
-                raise ValueError(
-                    f"Cannot process pixel data with Photometric Interpretation of {photometric_interpretation}."
-                )
+        # The below code loads all slice pixel data into a list of NumPy arrays in memory
+        # before stacking them into a single 3D volume. This can be inefficient for series
+        # with many slices.
+        if not slices:
+            return np.array([])
 
-        # Rescale Intercept and Slope attributes might be missing, but safe to assume defaults.
+        # Get shape and dtype from the first slice to pre-allocate numpy array.
         try:
-            intercept = slices[0][0x0028, 0x1052].value
-        except KeyError:
-            intercept = 0
+            first_slice_pixel_array = _get_rescaled_pixel_array(slices[0])
+            vol_shape = (len(slices),) + first_slice_pixel_array.shape
+            dtype = first_slice_pixel_array.dtype
+        except Exception as e:
+            logging.error(f"Failed to get pixel array from the first slice: {e}")
+            raise
+
+        # Pre-allocate the volume data array.
+        vol_data = np.empty(vol_shape, dtype=dtype)
+        vol_data[0] = first_slice_pixel_array
+
+        # Read subsequent slices directly into the pre-allocated array.
+        for i, s in enumerate(slices[1:], 1):
+            try:
+                vol_data[i] = _get_rescaled_pixel_array(s)
+            except Exception as e:
+                logging.error(f"Failed to get pixel array from slice {i}: {e}")
+                raise
 
-        try:
-            slope = slices[0][0x0028, 0x1053].value
-        except KeyError:
-            slope = 1
-
-        # check if vol_data, intercept, and slope can be cast to uint16 without data loss
-        if (
-            np.can_cast(vol_data, np.uint16, casting="safe")
-            and np.can_cast(intercept, np.uint16, casting="safe")
-            and np.can_cast(slope, np.uint16, casting="safe")
-        ):
-            logging.info("Casting to uint16")
-            vol_data = np.array(vol_data, dtype=np.uint16)
-            intercept = np.uint16(intercept)
-            slope = np.uint16(slope)
-        elif (
-            np.can_cast(vol_data, np.float32, casting="safe")
-            and np.can_cast(intercept, np.float32, casting="safe")
-            and np.can_cast(slope, np.float32, casting="safe")
-        ):
-            logging.info("Casting to float32")
-            vol_data = np.array(vol_data, dtype=np.float32)
-            intercept = np.float32(intercept)
-            slope = np.float32(slope)
-        elif (
-            np.can_cast(vol_data, np.float64, casting="safe")
-            and np.can_cast(intercept, np.float64, casting="safe")
-            and np.can_cast(slope, np.float64, casting="safe")
-        ):
-            logging.info("Casting to float64")
-            vol_data = np.array(vol_data, dtype=np.float64)
-            intercept = np.float64(intercept)
-            slope = np.float64(slope)
-
-        if slope != 1:
-            vol_data = slope * vol_data
-
-        vol_data += intercept
         return vol_data
 
     def create_volumetric_image(self, vox_data, metadata):
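The memory-capping pattern in the new operator code (pre-allocate the full volume from the first slice's shape and dtype, then write each later slice directly into place instead of stacking a list of per-slice arrays) can be illustrated with a minimal standalone sketch. The callable-based slice interface here is hypothetical, standing in for the operator's per-slice rescale step.

```python
import numpy as np

def build_volume(slice_loaders):
    """Stack per-slice 2D arrays into a [D, H, W] volume without holding
    every slice as a separate intermediate array at once.

    `slice_loaders` is a sequence of zero-argument callables, each returning
    one 2D ndarray (a stand-in for the per-slice load-and-rescale step).
    """
    if not slice_loaders:
        return np.array([])
    first = slice_loaders[0]()
    # The first slice determines the shape and dtype of the whole volume.
    vol = np.empty((len(slice_loaders),) + first.shape, dtype=first.dtype)
    vol[0] = first
    for i, load_slice in enumerate(slice_loaders[1:], 1):
        vol[i] = load_slice()  # written straight into the pre-allocated block
    return vol

# Three 2x2 slices filled with 0, 1, 2 (k=k binds each loop value early).
vol = build_volume([lambda k=k: np.full((2, 2), k, dtype=np.int16) for k in range(3)])
print(vol.shape, vol.dtype)  # (3, 2, 2) int16
```

Compared with `np.stack` over a list comprehension, the peak extra memory is roughly one slice plus the destination volume, rather than all slices plus the stacked copy; this mirrors the commit's "capping each slice's ndarray to min size" goal.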

notebooks/tutorials/01_simple_app.ipynb

Lines changed: 544 additions & 269 deletions
Large diffs are not rendered by default.
