articles/ai-foundry/how-to/healthcare-ai/deploy-medimageparse.md (22 additions & 22 deletions)
@@ -4,7 +4,7 @@ titleSuffix: Azure AI Foundry
description: Learn how to use MedImageParse and MedImageParse 3D Healthcare AI models with Azure AI Foundry.
ms.service: azure-ai-foundry
ms.topic: how-to
-ms.date: 04/24/2025
+ms.date: 08/13/2025
ms.reviewer: itarapov
reviewer: ivantarapov
ms.author: mopeakande
@@ -28,32 +28,32 @@ In this article, you learn how to deploy prompt-based image segmentation models,

## MedImageParse

-Biomedical image analysis is crucial for discovery in fields like cell biology, pathology, and radiology. Traditionally, tasks such as segmentation, detection, and recognition of relevant objects are addressed separately, which can limit the overall effectiveness of image analysis. However, MedImageParse unifies these tasks through image parsing, by jointly conducting segmentation, detection, and recognition across numerous object types and imaging modalities. By applying the interdependencies among these subtasks—such as the semantic labels of segmented objects—the model enhances accuracy and enables novel applications. For example, it allows users to segment all relevant objects in an image, by using a simple text prompt. This approach eliminates the need to manually specify bounding boxes for each object.
+Biomedical image analysis is crucial for discovery in fields like cell biology, pathology, and radiology. Traditionally, tasks such as segmentation, detection, and recognition of relevant objects are addressed separately, which can limit the overall effectiveness of image analysis. However, MedImageParse unifies these tasks through image parsing by jointly conducting segmentation, detection, and recognition across numerous object types and imaging modalities. By applying the interdependencies among these subtasks—such as the semantic labels of segmented objects—the model enhances accuracy and enables novel applications. For example, it allows users to segment all relevant objects in an image by using a simple text prompt. This approach eliminates the need to manually specify bounding boxes for each object.

The following image shows the conceptual architecture of the MedImageParse model where an image embedding model is augmented with a task adaptation layer to produce segmentation masks and textual descriptions.

:::image type="content" source="../../media/how-to/healthcare-ai/medimageparse-flow.gif" alt-text="Animation of data flow through MedImageParse model showing image coming through the model paired with a task adaptor and turning into a set of segmentation masks.":::

-Remarkably, the segmentation masks and textual descriptions were achieved by using only standard segmentation datasets, augmented by natural-language labels, or descriptions harmonized with established biomedical object ontologies. This approach not only improved individual task performance but also offered an all-in-one tool for biomedical image analysis, paving the way for more efficient and accurate image-based biomedical discovery.
+Remarkably, the segmentation masks and textual descriptions are achieved by using only standard segmentation datasets, augmented by natural-language labels or descriptions harmonized with established biomedical object ontologies. This approach not only improves individual task performance but also offers an all-in-one tool for biomedical image analysis, paving the way for more efficient and accurate image-based biomedical discovery.
# [MedImageParse 3D](#tab/medimageparse-3d)

## MedImageParse 3D

-Similar to the MedImageParse model, MedImageParse 3D uses a combination of a text prompt and a medical image to create a segmentation mask. However, unlike MedImageParse, the MedImageParse 3D model takes in an entire 3D volume—a common way of representing the imaged area for cross-sectional imaging modalities like CT or MRI—and generates the 3-dimensional segmentation mask.
+Similar to the MedImageParse model, MedImageParse 3D uses a combination of a text prompt and a medical image to create a segmentation mask. However, unlike MedImageParse, MedImageParse 3D takes in an entire 3D volume—a common way of representing the imaged area for cross-sectional imaging modalities like CT or MRI—and generates the three-dimensional segmentation mask.

---

## Prerequisites

-- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
+- An Azure subscription with a valid payment method. Free or trial Azure subscriptions don't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.

- If you don't have one, [create a [!INCLUDE [hub](../../includes/hub-project-name.md)]](../create-projects.md?pivots=hub-project).

-- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Foundry portal](../../concepts/rbac-ai-foundry.md).
+- Azure role-based access controls (Azure RBAC) grant access to operations in Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Foundry portal](../../concepts/rbac-ai-foundry.md).

## Deploy the model to a managed compute

-Deployment to a self-hosted managed inference solution allows you to customize and control all the details about how the model is served. You can deploy the model from its model card in the catalog UI of [Azure AI Foundry](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog) or [deploy it programmatically](../deploy-models-managed.md).
+Deployment to a self-hosted managed inference solution lets you customize and control all the details about how the model is served. You can deploy the model from its model card in the catalog UI of [Azure AI Foundry](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog), or [deploy it programmatically](../deploy-models-managed.md).
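
For the programmatic path, a minimal sketch using the `azure-ai-ml` SDK might look like the following. The model ID, endpoint name, and VM SKU here are illustrative assumptions; copy the exact model ID from the model card.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_OR_PROJECT>",
)

# Create the managed online endpoint, then deploy the model to it.
endpoint = ManagedOnlineEndpoint(name="medimageparse-endpoint")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

deployment = ManagedOnlineDeployment(
    name="default",
    endpoint_name=endpoint.name,
    # Illustrative model ID; take the real one from the model card.
    model="azureml://registries/azureml/models/MedImageParse/labels/latest",
    instance_type="Standard_NC6s_v3",  # illustrative GPU SKU
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```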

-In the deployment configuration, you get to choose an authentication method. This example uses Azure Machine Learning token-based authentication. For more authentication options, see the [corresponding documentation page](../../../machine-learning/how-to-setup-authentication.md). Also, the client is created from a configuration file that is created automatically for Azure Machine Learning virtual machines (VMs). Learn more on the [corresponding API documentation page](/python/api/azure-ai-ml/azure.ai.ml.mlclient#azure-ai-ml-mlclient-from-config).
+In the deployment configuration, you choose an authentication method. This example uses Azure Machine Learning token-based authentication. For more authentication options, see the [corresponding documentation page](../../../machine-learning/how-to-setup-authentication.md). Also, the client is created from a configuration file that's created automatically for Azure Machine Learning virtual machines (VMs). Learn more on the [corresponding API documentation page](/python/api/azure-ai-ml/azure.ai.ml.mlclient#azure-ai-ml-mlclient-from-config).
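
A minimal sketch of that client creation, assuming you're on an Azure Machine Learning VM where the configuration file already exists:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# from_config reads the workspace details from the auto-generated
# config.json available on Azure Machine Learning VMs.
ml_client = MLClient.from_config(credential=DefaultAzureCredential())
```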

### Make basic calls to the model

-Once the model is deployed, use the following code to send data and retrieve segmentation masks.
+After you deploy the model, use the following code to send data and retrieve segmentation masks.
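
A condensed sketch of one such call, assuming the `ml_client` from the previous step and an endpoint named `medimageparse` (the endpoint name, file names, and prompt are illustrative; the full request schema is described later in this article):

```python
import base64
import json

# Encode a 1024x1024 RGB PNG (see the image format notes later in this article).
with open("xray.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

request = {
    "input_data": {
        "columns": ["image", "text"],
        "index": 1,  # one input in this request
        "data": [[image_b64, "left lung & right lung"]],
    }
}

with open("request.json", "w") as f:
    json.dump(request, f)

# `ml_client` comes from MLClient.from_config(...) shown earlier.
raw_response = ml_client.online_endpoints.invoke(
    endpoint_name="medimageparse",  # illustrative endpoint name
    request_file="request.json",
)
```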
# [MedImageParse](#tab/medimageparse)
@@ -206,7 +206,7 @@ MedImageParse and MedImageParse 3D models assume a simple single-turn interactio

-Request payload is a JSONformatted string containing the following parameters:
+The request payload is a JSON-formatted string containing the following parameters:
# [MedImageParse](#tab/medimageparse)
@@ -219,8 +219,8 @@ The `input_data` object contains the following fields:
|`columns`|`list[string]`| Y |`"image"`, `"text"`| An object containing the strings mapping data to inputs passed to the model.|
-|`index`|`integer`| Y | 0 - 256 | Count of inputs passed to the model. You're limited by how much data can be passed in a single POST request, which depends on the size of your images. Therefore, it's reasonable to keep this number in the dozens. |
-|`data`|`list[list[string]]`| Y | "" | The list contains the items passed to the model which is defined by the index parameter. Each item is a list of two strings. The order is defined by the `columns` parameter. The `text` string contains the prompt text. The `image` string is the image bytes encoded using base64 and decoded as utf-8 string. <br/>**NOTE**: The image should be resized to `1024x1024` pixels before submitting to the model, preserving the aspect ratio. Empty space should be padded with black pixels. See the [Generating Segmentation for a Variety of Imaging Modalities](https://aka.ms/healthcare-ai-examples-mip-examples) sample notebook for an example of resizing and padding code.<br/><br/> The input text is a string containing multiple sentences separated by the special character `&`. For example: `tumor core & enhancing tumor & non-enhancing tumor`. In this case, there are three sentences, so the output consists of three images with segmentation masks. |
+|`index`|`integer`| Y | 0 - 256 | Count of inputs passed to the model. You're limited by how much data you can pass in a single POST request, which depends on the size of your images. Therefore, it's reasonable to keep this number in the dozens. |
+|`data`|`list[list[string]]`| Y | "" | The list contains the items you pass to the model, which the `index` parameter defines. Each item is a list of two strings. The order is defined by the `columns` parameter. The `text` string contains the prompt text. The `image` string is the image bytes encoded by using base64 and decoded as a utf-8 string. <br/>**NOTE**: You should resize the image to `1024x1024` pixels before submitting it to the model, preserving the aspect ratio. Empty space should be padded with black pixels. See the [Generating Segmentation for a Variety of Imaging Modalities](https://aka.ms/healthcare-ai-examples-mip-examples) sample notebook for an example of resizing and padding code.<br/><br/> The input text is a string containing multiple sentences separated by the special character `&`. For example: `tumor core & enhancing tumor & non-enhancing tumor`. In this case, there are three sentences, so the output consists of three images with segmentation masks. |
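
To make the `index` and `data` semantics concrete, a request body carrying two images might look like this sketch (base64 strings truncated and illustrative):

```python
payload = {
    "input_data": {
        "columns": ["image", "text"],
        "index": 2,  # two inputs in this request
        "data": [
            # Each item: [base64-encoded PNG, '&'-separated prompt sentences].
            ["iVBORw0KGgoAAA...", "tumor core & enhancing tumor & non-enhancing tumor"],
            ["iVBORw0KGgoBBB...", "left lung & right lung"],
        ],
    }
}
# Each '&'-separated sentence yields its own segmentation mask:
# three masks for the first image, two for the second.
```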
# [MedImageParse 3D](#tab/medimageparse-3d)
@@ -233,8 +233,8 @@ The `input_data` object contains the following fields:
|`columns`|`list[string]`| Yes |`"image"`, `"text"`| An object containing the strings mapping data to inputs passed to the model.|
-|`index`|`integer`| Yes | 0 |This parameter is used when multiple inputs are passed to the endpoint in one call. This model's endpoint wrapper doesn't use this parameter, so it should be set to 0. |
-|`data`|`list[list[string]]`| Yes | Base64 image + text prompt | The list contains the items passed to the model which is defined by the index parameter. Each item is a list of two strings. The order is defined by the `columns` parameter. The `text` string contains the prompt text. The `image` string is the input volume in NIfTI format encoded using base64 and decoded as utf-8 string. The input text is a string containing the target (for example, organ) to be segmented. |
+|`index`|`integer`| Yes | 0 |Use this parameter when you pass multiple inputs to the endpoint in one call. This model's endpoint wrapper doesn't use this parameter, so set it to 0. |
+|`data`|`list[list[string]]`| Yes | Base64 image + text prompt | The list contains the items you pass to the model, defined by the `index` parameter. Each item is a list of two strings. The order is defined by the `columns` parameter. The `text` string contains the prompt text. The `image` string is the input volume in NIfTI format encoded by using base64 and decoded as a utf-8 string. The input text is a string containing the target (for example, organ) to be segmented. |
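
For the 3D variant, a sketch of the request body, assuming a CT volume saved as a NIfTI file (the file name and prompt are illustrative):

```python
import base64

with open("ct_volume.nii", "rb") as f:
    volume_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "input_data": {
        "columns": ["image", "text"],
        "index": 0,  # unused by this endpoint wrapper; keep at 0
        "data": [[volume_b64, "pancreas"]],
    }
}
```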
---
@@ -286,20 +286,20 @@ The `input_data` object contains the following fields:
# [MedImageParse](#tab/medimageparse)
-Response payload is a list of JSON-formatted strings, each corresponding to a submitted image. Each string contains a `segmentation_object` object.
+The response payload is a list of JSON-formatted strings, each corresponding to a submitted image. Each string contains a `segmentation_object`.

-`segmentation_object` contains the following fields:
+The `segmentation_object` contains the following fields:
|`image_features`|`segmentation_mask`| An object representing the segmentation masks for a given image |
|`text_features`|`list[string]`| List of strings, one per each submitted text string, classifying the segmentation masks into one of 16 biomedical segmentation categories each: `liver`, `lung`, `kidney`, `pancreas`, `heart anatomies`, `brain anatomies`, `eye anatomies`, `vessel`, `other organ`, `tumor`, `infection`, `other lesion`, `fluid disturbance`, `other abnormality`, `histology structure`, `other`|

-`segmentation_mask` contains the following fields:
+The `segmentation_mask` contains the following fields:

-|`data`|`string`| A base64-encoded NumPy array containing the one-hot encoded segmentation mask. There could be multiple instances of objects in the returned array. Decode and use `np.frombuffer` to deserialize. The array contains a three-dimensional matrix. The array's size is `1024x1024` (matching the input image dimensions), with the third dimension representing the number of input sentences provided. See the provided [sample notebooks](#learn-more-from-samples) for decoding and usage examples. |
+|`data`|`string`| A base64-encoded NumPy array containing the one-hot encoded segmentation mask. The array can include multiple instances of objects. Use `np.frombuffer` to deserialize after decoding. The array contains a three-dimensional matrix. The array's size is `1024x1024` (matching the input image dimensions), with the third dimension representing the number of input sentences provided. See the provided [sample notebooks](#learn-more-from-samples) for decoding and usage examples. |
|`shape`|`list[int]`| A list representing the shape of the array (typically `[NUM_PROMPTS, 1024, 1024]`) |
|`dtype`|`string`| An instance of the [NumPy dtype class](https://numpy.org/doc/stable/reference/arrays.dtypes.html) serialized to a string. Describes the data packing in the data array. |
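
Based on the field descriptions above, decoding a mask might look like the following sketch; the exact response nesting can vary, and the sample notebooks contain the authoritative parsing code:

```python
import base64
import json

import numpy as np

# `raw_response` is the endpoint response: a list of JSON-formatted
# strings, one per submitted image.
for item in json.loads(raw_response):
    seg = json.loads(item) if isinstance(item, str) else item
    mask_info = seg["image_features"]
    buffer = base64.b64decode(mask_info["data"])
    mask = np.frombuffer(buffer, dtype=np.dtype(mask_info["dtype"]))
    mask = mask.reshape(mask_info["shape"])  # typically [NUM_PROMPTS, 1024, 1024]
    print(seg["text_features"], mask.shape)
```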

-The deployed model API supports images encoded in PNG format. For optimal results, we recommend using uncompressed/lossless PNGs with RGB images.
+The deployed model API supports images encoded in PNG format. For optimal results, we recommend using uncompressed or lossless PNGs with RGB images.

-As described in the API specification, the model only accepts images in the resolution of `1024x1024` pixels. Images need to be resized and padded (if they have a non-square aspect ratio).
+As described in the API specification, the model only accepts images in the resolution of `1024x1024` pixels. You need to resize and pad images if they have a non-square aspect ratio.

-See the [Generating Segmentation for a Variety of Imaging Modalities](https://aka.ms/healthcare-ai-examples-mip-examples) notebook for techniques and sample code useful for submitting images of various sizes stored using various biomedical imaging formats.
+For techniques and sample code useful for submitting images of various sizes stored using various biomedical imaging formats, see the [Generating Segmentation for a Variety of Imaging Modalities](https://aka.ms/healthcare-ai-examples-mip-examples) notebook.
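
As one way to do that resize-and-pad step, here's a sketch using Pillow (not the notebook's exact code):

```python
from PIL import Image

def resize_and_pad(path: str, size: int = 1024) -> Image.Image:
    """Fit an image into a size x size square, preserving the aspect
    ratio and padding the empty space with black pixels."""
    img = Image.open(path).convert("RGB")
    scale = size / max(img.size)
    resized = img.resize(
        (round(img.width * scale), round(img.height * scale))
    )
    canvas = Image.new("RGB", (size, size))  # black background
    canvas.paste(resized, ((size - resized.width) // 2,
                           (size - resized.height) // 2))
    return canvas
```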
# [MedImageParse 3D](#tab/medimageparse-3d)
@@ -430,7 +430,7 @@ The deployed model API supports volumes encoded in NIfTI format.
---
## Learn more from samples

-MedImageParse is a versatile model that can be applied to a wide range of tasks and imaging modalities. For more examples see the following interactive Python Notebooks:
+MedImageParse is a versatile model that you can apply to a wide range of tasks and imaging modalities. For more examples, see the following interactive Python Notebooks:
* [Deploying and Using MedImageParse](https://aka.ms/healthcare-ai-examples-mip-deploy): Learn how to deploy the MedImageParse model and integrate it into your workflow.
* [Generating Segmentation for a Variety of Imaging Modalities](https://aka.ms/healthcare-ai-examples-mip-examples): Understand how to use MedImageParse to segment a wide variety of different medical images and learn some prompting techniques.