In this article, you learn how to deploy MedImageParse 3D as an online endpoint for real-time inference and issue a basic call to the API. The steps you take are:
* Deploy the model to a self-hosted managed compute.
* Grant permissions to the endpoint.
## Prerequisites
To use the MedImageParse 3D model, you need the following prerequisites:
### A model deployment
**Deployment to a self-hosted managed compute**
The MedImageParse 3D model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served. You can deploy the model through the catalog UI (in [Azure AI Foundry](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically.
To __deploy the model through the UI__:
1. Go to the catalog.
1. Search for _MedImageParse3D_ and select the model card.
1. On the model's overview page, select __Deploy__.
1. If given the option to choose between serverless API deployment and deployment using a managed compute, select **Managed Compute**.
1. Fill out the details in the deployment window.
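To __deploy the model programmatically__, you can create the endpoint and deployment with the Azure Machine Learning SDK. The following is a minimal sketch; the endpoint name, instance type, and registry model ID are placeholders that you should take from the model card and your own workspace.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Create the online endpoint that hosts the model.
endpoint = ManagedOnlineEndpoint(name="medimageparse3d-endpoint")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy the catalog model to the endpoint. The model ID and GPU SKU below
# are illustrative; copy the exact values from the model card.
deployment = ManagedOnlineDeployment(
    name="medimageparse3d-deploy",
    endpoint_name=endpoint.name,
    model="azureml://registries/azureml/models/MedImageParse3D/labels/latest",
    instance_type="Standard_NC6s_v3",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```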
In this section, you consume the model and make basic calls to it.
### Use REST API to consume the model
Consume the MedImageParse 3D segmentation model as a REST API, using simple GET requests or by creating a client as follows:
```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Create a client scoped to the workspace that hosts the endpoint
# (the workspace identifiers below are placeholders).
ml_client_workspace = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)
```

In the deployment configuration, you get to choose the authentication method. This example uses `DefaultAzureCredential`.
Once the model is deployed, use the following code to send data and retrieve segmentation masks.
TODO: the example here follows MedImageParse (2D) in using `ml_client_workspace.online_endpoints.invoke` rather than `urllib.request.urlopen` as in this [notebook](https://dev.azure.com/msazuredev/HLS%20AI%20Platform/_git/3dMedImageParseDeployment?path=/notebooks/03.model.endpoint.api.call.ipynb&version=GBmain&line=192&lineEnd=193&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents). Verify the correct call pattern.
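Pending that verification, the following is a minimal sketch that follows the 2D call pattern. The volume file name, endpoint name, and deployment name are placeholders; `ml_client_workspace` is the client created earlier.

```python
import base64
import json

# Base64-encode the input volume (NIfTI format).
with open("input_volume.nii.gz", "rb") as f:
    encoded_volume = base64.b64encode(f.read()).decode("utf-8")

# Build the request payload described in the table that follows.
payload = {
    "input_data": {
        "columns": ["image", "text"],
        "index": [0],
        "data": [[encoded_volume, "liver"]],
    }
}
with open("request.json", "w") as f:
    json.dump(payload, f)

# Invoke the deployed endpoint with the request file.
response = ml_client_workspace.online_endpoints.invoke(
    endpoint_name="medimageparse3d-endpoint",
    deployment_name="medimageparse3d-deploy",
    request_file="request.json",
)
```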
The `input_data` object contains the following fields:

|Key|Type|Required|Allowed values|Description|
| --- | --- | --- | --- | --- |
|`columns`|`list[string]`| Y |`"image"`, `"text"`| An object containing the strings mapping data to inputs passed to the model.|
|`index`|`integer`| Y | 0 - 256 | Count of inputs passed to the model. You're limited by how much data can be passed in a single POST request, which depends on the size of your volumes. Therefore, it's reasonable to keep this number in the dozens. |
|`data`|`list[list[string]]`| Y | "" | The list contains the items passed to the model, as defined by the `index` parameter. Each item is a list of two strings. The order is defined by the `columns` parameter. The `text` string contains the prompt text. The `image` string is the input volume in NIfTI format, encoded using base64 and decoded as a UTF-8 string. The input text is a string naming the target (for example, an organ) to be segmented. |
### Request example
**Requesting segmentation of all cells in a pathology image**
138
139
```JSON
139
140
{
140
141
"input_data": {
@@ -144,16 +145,16 @@ The `input_data` object contains the following fields:
Response payload is a list of JSON-formatted strings, each corresponding to a submitted volume. Each string contains a `segmentation_object` object.
`segmentation_object` contains the following fields:
|Field|Type|Description|
| --- | --- | --- |
|`dtype`|`string`| An instance of the [NumPy dtype class](https://numpy.org/doc/stable/reference/arrays.dtypes.html) serialized to a string. Describes the data packing in the data array. |
### Response example
The requested segmentation mask is returned as a NIfTI file, represented by a base64-encoded string.
TODO: verify whether the value of `nifti_file` is a string or a JSON object (without the quotes).
```JSON
[
  {
    "nifti_file": "{'data': 'H4sIAAAAAAAE...'}"
  }
]
```
TODO: In an [example notebook](https://dev.azure.com/msazuredev/HLS%20AI%20Platform/_git/3dMedImageParseDeployment?path=/notebooks/01.model.packaging.ipynb&version=GBmain&line=314&lineEnd=315&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents), `temp_file.flush()` and `os.unlink(temp_file.name)` are commented out. Are these lines needed?
The NIfTI file can be obtained by decoding the returned string with code like the following (the helper name and imports are illustrative):

```python
import base64
import json
import os
import tempfile

import nibabel as nib


def decode_nifti(base64_string):
    """
    Decode a base64-encoded NIfTI image returned by the endpoint.

    Args:
        base64_string (str): Base64 encoded string of NIfTI image

    Returns:
        numpy.ndarray: Voxel data of the decoded NIfTI image
    """
    base64_string = json.loads(base64_string)["data"]
    # Decode Base64 string to bytes
    byte_data = base64.b64decode(base64_string)

    # Create a temporary file to load the NIfTI image
    with tempfile.NamedTemporaryFile(suffix='.nii.gz', delete=False) as temp_file:
        temp_file.write(byte_data)
        temp_file.flush()
        # Load NIfTI image from the temporary file
        nifti_image = nib.load(temp_file.name)
        # Read the voxel data while the file still exists (nibabel loads lazily)
        image_data = nifti_image.get_fdata()

    # Remove temporary file
    os.unlink(temp_file.name)

    return image_data
```
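For example, assuming `response` holds the string returned by the endpoint and matches the response example above:

```python
import json

# One entry per submitted volume; decode the first segmentation mask.
results = json.loads(response)
mask = decode_nifti(results[0]["nifti_file"])
print(mask.shape)
```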
### Supported input formats
The deployed model API supports volumes encoded in NIfTI format.
<!--
See the [Generating Segmentation for a Variety of Imaging Modalities](https://aka.ms/healthcare-ai-examples-mip-examples) notebook for techniques and sample code useful for submitting images of various sizes stored using various biomedical imaging formats.
## Learn more from samples
For more MedImageParse 3D examples, see the following interactive Python notebooks:
#### Getting started
* [Deploying and Using MedImageParse 3D](https://aka.ms/healthcare-ai-examples-mip-deploy): Learn how to deploy the MedImageParse 3D model and integrate it into your workflow.
#### Advanced inferencing techniques and samples
* [Segmentation examples](https://aka.ms/healthcare-ai-examples-mip-examples): Understand how to use MedImageParse 3D to segment images in DICOM and NIfTI formats. -->