Commit 30d349d: Update deploy-medimageparse3d.md (WIP)

1 parent 10d16a3

articles/ai-foundry/how-to/healthcare-ai/deploy-medimageparse3d.md: 62 additions, 33 deletions
author: msakande
---

# How to use MedImageParse 3D healthcare AI model for segmentation of medical images

[!INCLUDE [health-ai-models-meddev-disclaimer](../../includes/health-ai-models-meddev-disclaimer.md)]

In this article, you learn how to deploy MedImageParse 3D as an online endpoint for real-time inference and issue a basic call to the API. The steps you take are:

* Deploy the model to a self-hosted managed compute.
* Grant permissions to the endpoint.
## Prerequisites

To use the MedImageParse 3D model, you need the following prerequisites:

### A model deployment

**Deployment to a self-hosted managed compute**

The MedImageParse 3D model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served. You can deploy the model through the catalog UI (in [Azure AI Foundry](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically, as sketched after the UI steps below.

To __deploy the model through the UI__:

1. Go to the catalog.
1. Search for _MedImageParse3D_ and select the model card.
1. On the model's overview page, select __Deploy__.
1. If given the option to choose between serverless API deployment and deployment using a managed compute, select **Managed Compute**.
1. Fill out the details in the deployment window.
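
To __deploy programmatically__, a minimal sketch using the `azure-ai-ml` SDK follows. The model ID, endpoint and deployment names, and the GPU instance type are placeholder assumptions; take the exact values from the model card and your available quota:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint
from azure.identity import DefaultAzureCredential

# Connect to your workspace (placeholder names).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Create a managed online endpoint.
endpoint = ManagedOnlineEndpoint(name="medimageparse3d-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy the model from the catalog to the endpoint.
# The model ID and instance type below are placeholders.
deployment = ManagedOnlineDeployment(
    name="medimageparse3d-deployment",
    endpoint_name=endpoint.name,
    model="azureml://registries/azureml/models/MedImageParse3D/labels/latest",
    instance_type="Standard_NC6s_v3",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```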

In this section, you consume the model and make basic calls to it.

### Use REST API to consume the model

Consume the MedImageParse 3D segmentation model as a REST API, using simple GET requests or by creating a client as follows:

```python
from azure.ai.ml import MLClient
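from azure.identity import DefaultAzureCredential

# NOTE: the client setup below is a sketch; substitute your own subscription,
# resource group, and workspace names.
ml_client_workspace = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)
```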

In the deployment configuration, you get to choose the authentication method.

Once the model is deployed, use the following code to send data and retrieve segmentation masks.

TODO: the example here follows MedImageParse (2D) where it uses `ml_client_workspace.online_endpoints.invoke` instead of `urllib.request.urlopen` as in this [notebook](https://dev.azure.com/msazuredev/HLS%20AI%20Platform/_git/3dMedImageParseDeployment?path=/notebooks/03.model.endpoint.api.call.ipynb&version=GBmain&line=192&lineEnd=193&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents). Verify the correct call pattern.

```python
import base64
import json
import os

# Read the sample volume and encode it as a base64 string
sample_image = "example.nii.gz"
with open(sample_image, "rb") as image_file:
    base64_image = base64.b64encode(image_file.read()).decode('utf-8')

# Build the request payload: one (image, text) input pair
data = {
    "input_data": {
        "columns": [ "image", "text" ],
        "index": [ 0 ],
        "data": [
            [
                base64_image,
                "pancreas"
            ]
        ]
    }
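}

# The remainder of this snippet is a sketch; the endpoint and deployment
# names are placeholders for the values from your deployment.
request_file_name = "request.json"
with open(request_file_name, "w") as request_file:
    json.dump(data, request_file)

response = ml_client_workspace.online_endpoints.invoke(
    endpoint_name="<endpoint-name>",
    deployment_name="<deployment-name>",
    request_file=request_file_name,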
)
```
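
As the TODO above notes, the deployment notebook calls the scoring endpoint directly with `urllib.request.urlopen`. A sketch of that alternative call pattern follows; the scoring URL and API key are placeholders taken from your endpoint's **Consume** tab:

```python
import json
import urllib.request

# Placeholder scoring URL and key; copy the real values from your endpoint.
scoring_url = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
api_key = "<api-key>"

body = json.dumps(data).encode("utf-8")
request = urllib.request.Request(
    scoring_url,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    },
)
with urllib.request.urlopen(request) as http_response:
    response = json.loads(http_response.read())
```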

## Use MedImageParse 3D REST API

TODO: verify all contents in this section

The MedImageParse 3D model assumes a simple single-turn interaction where one request produces one response.

### Request schema

The `input_data` object contains the following fields:
| Key | Type | Required | Allowed values | Description |
| ------------- | -------------- | :-----------------: | ----------------- | ----------- |
| `columns` | `list[string]` | Y | `"image"`, `"text"` | An object containing the strings mapping data to inputs passed to the model. |
| `index` | `integer` | Y | 0 - 256 | Count of inputs passed to the model. You're limited by how much data can be passed in a single POST request, which depends on the size of your images, so it's reasonable to keep this number in the dozens. |
| `data` | `list[list[string]]` | Y | "" | The list contains the items passed to the model, as defined by the `index` parameter. Each item is a list of two strings whose order is defined by the `columns` parameter. The `text` string contains the prompt text. The `image` string is the input volume in NIfTI format, encoded using base64 and decoded as a UTF-8 string. The input text is a string naming the target to be segmented (for example, an organ such as `pancreas`). |

### Request example

**Requesting segmentation of the pancreas in a 3D volume**

```JSON
{
    "input_data": {
        "columns": [
            "image",
            "text"
        ],
        "index": [0],
        "data": [
            ["H4sIAAAAAAAA...",
            "pancreas"]
        ]
    }
}
```

### Response schema

Response payload is a list of JSON-formatted strings, each corresponding to a submitted volume. Each string contains a `segmentation_object` object.

`segmentation_object` contains the following fields:

| Key | Type | Description |
| --- | --- | --- |
| `data` | `string` | The segmentation mask content, encoded as a base64 string. |
| `dtype` | `string` | An instance of the [NumPy dtype class](https://numpy.org/doc/stable/reference/arrays.dtypes.html) serialized to a string. Describes the data packing in the data array. |

### Response example

The requested segmentation mask is stored in NIfTI format, represented by an encoded string.

TODO: verify whether the value of `nifti_file` is a string or a JSON object (without the quotes).

```JSON
[
    {
        "nifti_file": "{'data': 'H4sIAAAAAAAE...'}"
    }
]
```

TODO: In an [example notebook](https://dev.azure.com/msazuredev/HLS%20AI%20Platform/_git/3dMedImageParseDeployment?path=/notebooks/01.model.packaging.ipynb&version=GBmain&line=314&lineEnd=315&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents), `temp_file.flush()` and `os.unlink(temp_file.name)` are commented out. Are these lines needed?

The NIfTI file can be obtained by decoding the returned string with code like the following:

```python
import base64
import json
import os
import tempfile

import nibabel as nib
import numpy as np


def decode_base64_to_nifti(base64_string: str) -> np.ndarray:
    """
    Decode a Base64 string back to a NIfTI image.

    Args:
        base64_string (str): Base64 encoded string of a NIfTI image

    Returns:
        np.ndarray: Voxel data of the decoded NIfTI image
    """
    base64_string = json.loads(base64_string)["data"]
    # Decode Base64 string to bytes
    byte_data = base64.b64decode(base64_string)

    # Write the bytes to a temporary file so nibabel can load them
    with tempfile.NamedTemporaryFile(suffix='.nii.gz', delete=False) as temp_file:
        temp_file.write(byte_data)
        temp_file.flush()
        # Load the NIfTI image and read its voxel data before the file is removed
        nifti_image = nib.load(temp_file.name)
        voxel_data = nifti_image.get_fdata()

    # Remove the temporary file
    os.unlink(temp_file.name)

    return voxel_data
```
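
A hypothetical usage sketch, assuming `response` holds the endpoint's JSON payload and that each `nifti_file` value parses as JSON:

```python
# Parse the response and decode the first returned mask (names assumed).
results = json.loads(response)
mask = decode_base64_to_nifti(results[0]["nifti_file"])
print(mask.shape)  # voxel dimensions of the segmentation mask
```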

### Supported input formats

The deployed model API supports volumes encoded in NIfTI format.
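
If your source data isn't already stored as NIfTI, a minimal sketch with `nibabel` shows how a volume could be written to the `.nii.gz` file used in the request example above; the array contents and identity affine are placeholders:

```python
import nibabel as nib
import numpy as np

# Placeholder voxel data and affine; use your real volume here.
volume = np.zeros((128, 128, 64), dtype=np.float32)
nifti_image = nib.Nifti1Image(volume, affine=np.eye(4))
nib.save(nifti_image, "example.nii.gz")
```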

<!--
See the [Generating Segmentation for a Variety of Imaging Modalities](https://aka.ms/healthcare-ai-examples-mip-examples) notebook for techniques and sample code useful for submitting images of various sizes stored using various biomedical imaging formats.

## Learn more from samples

For more MedImageParse 3D examples, see the following interactive Python Notebooks:

#### Getting started
* [Deploying and Using MedImageParse 3D](https://aka.ms/healthcare-ai-examples-mip-deploy): Learn how to deploy the MedImageParse 3D model and integrate it into your workflow.

#### Advanced inferencing techniques and samples
* [Segmentation examples](https://aka.ms/healthcare-ai-examples-mip-examples): Understand how to use MedImageParse 3D to segment images in DICOM and NIfTI formats. -->

## Related content
