| 1 | +--- |
| 2 | +title: How to deploy and use MedImageParse 3D healthcare AI model with Azure AI Foundry |
| 3 | +titleSuffix: Azure AI Foundry |
| 4 | +description: Learn how to use MedImageParse 3D Healthcare AI Model with Azure AI Foundry. |
| 5 | +ms.service: azure-ai-foundry |
| 6 | +manager: scottpolly |
| 7 | +ms.topic: how-to |
| 8 | +ms.date: 10/20/2024 |
| 9 | +ms.reviewer: itarapov |
| 10 | +reviewer: ivantarapov |
| 11 | +ms.author: mopeakande |
| 12 | +author: msakande |
| 13 | +#Customer intent: As a Data Scientist I want to learn how to use the MedImageParse 3D healthcare AI model to segment medical images. |
| 14 | + |
| 15 | +--- |
| 16 | + |
# How to use MedImageParse 3D healthcare AI model for segmentation of medical images
| 18 | + |
| 19 | +[!INCLUDE [health-ai-models-meddev-disclaimer](../../includes/health-ai-models-meddev-disclaimer.md)] |
| 20 | + |
In this article, you learn how to deploy MedImageParse 3D as an online endpoint for real-time inference and issue a basic call to the API. The steps you take are:

* Deploy the model to a self-hosted managed compute.
* Grant permissions to the endpoint.
* Send test data to the model, then receive and interpret the results.
| 26 | + |
| 27 | + |
| 28 | +## MedImageParse 3D - prompt-based segmentation of medical images |
Similar to our [MedImageParse model](deploy-medimageparse.md), MedImageParse 3D uses a combination of a text prompt and a medical image to create a segmentation mask. However, unlike MedImageParse, the MedImageParse 3D model takes in an entire 3D volume - a common way of representing the imaged area for cross-sectional imaging modalities like CT or MRI - and generates a three-dimensional segmentation mask.
| 30 | + |
| 31 | +## Prerequisites |
| 32 | + |
To use the MedImageParse 3D model, you need the following prerequisites:
| 34 | + |
| 35 | +### A model deployment |
| 36 | + |
| 37 | +**Deployment to a self-hosted managed compute** |
| 38 | + |
The MedImageParse 3D model can be deployed to our self-hosted managed inference solution, which lets you customize and control all the details of how the model is served. You can deploy the model through the catalog UI (in [Azure AI Foundry](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically.
| 40 | + |
| 41 | +To __deploy the model through the UI__: |
| 42 | + |
| 43 | +1. Go to the catalog. |
1. Search for _MedImageParse 3D_ and select the model card.
| 45 | +1. On the model's overview page, select __Deploy__. |
| 46 | +1. If given the option to choose between serverless API deployment and deployment using a managed compute, select **Managed Compute**. |
| 47 | +1. Fill out the details in the deployment window. |
| 48 | + |
| 49 | + > [!NOTE] |
| 50 | + > For deployment to a self-hosted managed compute, you must have enough quota in your subscription. If you don't have enough quota available, you can use our temporary quota access by selecting the option **I want to use shared quota and I acknowledge that this endpoint will be deleted in 168 hours.** |
| 51 | +1. Select __Deploy__. |
| 52 | + |
| 53 | +To __deploy the model programmatically__, see [How to deploy and inference a managed compute deployment with code](../deploy-models-managed.md). |
| 54 | + |
| 55 | +## Work with a segmentation model |
| 56 | + |
| 57 | +In this section, you consume the model and make basic calls to it. |
| 58 | + |
| 59 | +### Use REST API to consume the model |
| 60 | + |
Consume the MedImageParse 3D segmentation model as a REST API, using simple POST requests or by creating a client as follows:
| 62 | + |
| 63 | +```python |
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Authenticate and create a client from the workspace configuration file
credential = DefaultAzureCredential()

ml_client_workspace = MLClient.from_config(credential)
| 70 | +``` |
| 71 | + |
In the deployment configuration, you choose an authentication method. This example uses Azure Machine Learning token-based authentication. For more authentication options, see the [corresponding documentation page](../../../machine-learning/how-to-setup-authentication.md). Note also that the client is created from a configuration file that is created automatically for Azure Machine Learning virtual machines (VMs). Learn more on the [corresponding API documentation page](/python/api/azure-ai-ml/azure.ai.ml.mlclient#azure-ai-ml-mlclient-from-config).
| 73 | + |
| 74 | +### Make basic calls to the model |
| 75 | + |
| 76 | +Once the model is deployed, use the following code to send data and retrieve segmentation masks. |
| 77 | + |
| 78 | +```python |
import base64
import json

# Read an image file as raw bytes
def read_image(image_path):
    with open(image_path, "rb") as f:
        return f.read()

sample_image = "sample_image.png"
data = {
    "input_data": {
        "columns": [ "image", "text" ],
        "index": [ 0 ],
        "data": [
            [
                base64.encodebytes(read_image(sample_image)).decode("utf-8"),
                "neoplastic cells in breast pathology & inflammatory cells"
            ]
        ]
    }
}

# Write the request payload to a JSON file
request_file_name = "sample_request_data.json"
with open(request_file_name, "w") as request_file:
    json.dump(data, request_file)

# Invoke the endpoint using the names chosen when you created the deployment
response = ml_client_workspace.online_endpoints.invoke(
    endpoint_name=endpoint_name,
    deployment_name=deployment_name,
    request_file=request_file_name,
)
| 114 | +``` |
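The call returns the response body as a JSON string. As a minimal sketch of interpreting it (the helper name and the sample payload below are fabricated for illustration; the real response schema is described later in this article), you can parse the string and pull out the predicted labels:

```python
import json

# Hypothetical helper: extract the predicted label lists from the
# endpoint's JSON response string, one list per submitted image.
def list_labels(response: str) -> list:
    results = json.loads(response)
    return [item["text_features"] for item in results]

# Fabricated response for illustration only
sample_response = json.dumps(
    [{"image_features": "{...}", "text_features": ["liver", "pancreas"]}]
)
labels = list_labels(sample_response)  # [["liver", "pancreas"]]
```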
| 115 | + |
## Use MedImageParse 3D REST API
The MedImageParse 3D model assumes a simple single-turn interaction where one request produces one response.
| 118 | + |
| 119 | +### Request schema |
| 120 | + |
The request payload is a JSON-formatted string containing the following parameters:
| 122 | + |
| 123 | +| Key | Type | Required/Default | Description | |
| 124 | +| ------------- | -------------- | :-----------------:| ----------------- | |
| 125 | +| `input_data` | `[object]` | Y | An object containing the input data payload | |
| 126 | + |
| 127 | +The `input_data` object contains the following fields: |
| 128 | + |
| 129 | +| Key | Type | Required/Default | Allowed values | Description | |
| 130 | +| ------------- | -------------- | :-----------------:| ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | |
| 131 | +| `columns` | `list[string]` | Y | `"image"`, `"text"` | An object containing the strings mapping data to inputs passed to the model.| |
| `index` | `list[integer]` | Y | 0 - 256 | List of indices of the inputs passed to the model. You're limited by how much data can be passed in a single POST request, which depends on the size of your images. Therefore, it's reasonable to keep this number in the dozens. |
| 133 | +| `data` | `list[list[string]]` | Y | "" | The list contains the items passed to the model which is defined by the index parameter. Each item is a list of two strings. The order is defined by the `columns` parameter. The `text` string contains the prompt text. The `image` string is the image bytes encoded using base64 and decoded as utf-8 string. <br/>**NOTE**: The image should be resized to `1024x1024` pixels before submitting to the model, preserving the aspect ratio. Empty space should be padded with black pixels. See the [Generating Segmentation for a Variety of Imaging Modalities](https://aka.ms/healthcare-ai-examples-mip-examples) sample notebook for an example of resizing and padding code.<br/><br/> The input text is a string containing multiple sentences separated by the special character `&`. For example: `tumor core & enhancing tumor & non-enhancing tumor`. In this case, there are three sentences, so the output consists of three images with segmentation masks. | |
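For illustration, here's one way to assemble a payload that follows this schema (a sketch, not official client code; the `build_request` helper and the sample image bytes are made up):

```python
import base64
import json

# Hypothetical helper: build a request payload for one image and a list of
# prompts. `image_bytes` is assumed to already be a 1024x1024 PNG.
def build_request(image_bytes, prompts):
    text = " & ".join(prompts)  # prompts are joined with the special `&` separator
    return json.dumps({
        "input_data": {
            "columns": ["image", "text"],
            "index": [0],
            "data": [[base64.encodebytes(image_bytes).decode("utf-8"), text]],
        }
    })

payload = build_request(b"fake-png-bytes", ["tumor core", "enhancing tumor"])
```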
| 134 | + |
| 135 | +### Request example |
| 136 | + |
| 137 | +**Requesting segmentation of all cells in a pathology image** |
| 138 | +```JSON |
| 139 | +{ |
| 140 | + "input_data": { |
| 141 | + "columns": [ |
| 142 | + "image", |
| 143 | + "text" |
| 144 | + ], |
| 145 | + "index":[0], |
| 146 | + "data": [ |
| 147 | + ["iVBORw0KGgoAAAANSUhEUgAAAAIAAAACCAYAAABytg0kAAAAAXNSR0IArs4c6QAAAARnQU1BAACx\njwv8YQUAAAAJcEhZcwAAFiUAABYlAUlSJPAAAAAbSURBVBhXY/gUoPS/fhfDfwaGJe///9/J8B8A\nVGwJ5VDvPeYAAAAASUVORK5CYII=\n", |
| 148 | + "neoplastic & inflammatory cells "] |
| 149 | + ] |
| 150 | + } |
| 151 | +} |
| 152 | +``` |
| 153 | + |
| 154 | +### Response schema |
| 155 | + |
The response payload is a list of JSON-formatted strings, each corresponding to a submitted image. Each string contains a `segmentation_object` object.
| 157 | + |
| 158 | +`segmentation_object` contains the following fields: |
| 159 | + |
| 160 | +| Key | Type | Description | |
| 161 | +| ------------- | -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | |
| 162 | +| `image_features` | `segmentation_mask` | An object representing the segmentation masks for a given image | |
| 163 | +| `text_features` | `list[string]` | List of strings, one per each submitted text string, classifying the segmentation masks into one of 16 biomedical segmentation categories each: `liver`, `lung`, `kidney`, `pancreas`, `heart anatomies`, `brain anatomies`, `eye anatomies`, `vessel`, `other organ`, `tumor`, `infection`, `other lesion`, `fluid disturbance`, `other abnormality`, `histology structure`, `other` | |
| 164 | + |
| 165 | +`segmentation_mask` contains the following fields: |
| 166 | + |
| 167 | +| Key | Type | Description | |
| 168 | +| ------------- | -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | |
| 169 | +| `data` | `string` | A base64-encoded NumPy array containing the one-hot encoded segmentation mask. There could be multiple instances of objects in the returned array. Decode and use `np.frombuffer` to deserialize. The array contains a three-dimensional matrix. The array's size is `1024x1024` (matching the input image dimensions), with the third dimension representing the number of input sentences provided. See the provided [sample notebooks](#learn-more-from-samples) for decoding and usage examples. | |
| 170 | +| `shape` | `list[int]` | A list representing the shape of the array (typically `[NUM_PROMPTS, 1024, 1024]`) | |
| 171 | +| `dtype` | `string` | An instance of the [NumPy dtype class](https://numpy.org/doc/stable/reference/arrays.dtypes.html) serialized to a string. Describes the data packing in the data array. | |
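To work with the returned mask, decode the base64 string and rebuild the array with `np.frombuffer` using the `dtype` and `shape` fields. A minimal sketch (the round trip below uses a tiny fabricated mask rather than real model output):

```python
import base64
import numpy as np

# Decode a `segmentation_mask` object into a NumPy array using the
# `data`, `dtype`, and `shape` fields described in the table above.
def decode_mask(segmentation_mask):
    raw = base64.decodebytes(segmentation_mask["data"].encode("utf-8"))
    arr = np.frombuffer(raw, dtype=np.dtype(segmentation_mask["dtype"]))
    return arr.reshape(segmentation_mask["shape"])

# Round-trip demo with a tiny fabricated mask: 2 prompts, 4x4 pixels
mask = np.zeros((2, 4, 4), dtype=np.uint8)
mask[0, 1, 1] = 1
encoded = {
    "data": base64.encodebytes(mask.tobytes()).decode("utf-8"),
    "shape": [2, 4, 4],
    "dtype": "uint8",
}
decoded = decode_mask(encoded)
```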
| 172 | + |
| 173 | +### Response example |
| 174 | +**A simple inference requesting segmentation of two objects** |
| 175 | +```JSON |
| 176 | +[ |
| 177 | + { |
| 178 | + "image_features": "{ |
| 179 | + 'data': '4oCwUE5HDQoa...', |
| 180 | + 'shape': [2, 1024, 1024], |
| 181 | + 'dtype': 'uint8'}", |
| 182 | + "text_features": ['liver', 'pancreas'] |
| 183 | + } |
| 184 | +] |
| 185 | +``` |
| 186 | + |
| 187 | +### Supported image formats |
| 188 | + |
| 189 | +The deployed model API supports images encoded in PNG format. For optimal results, we recommend using uncompressed/lossless PNGs with RGB images. |
| 190 | + |
As described in the API specification, the model accepts only images with a resolution of `1024x1024` pixels. Images need to be resized, and padded when their aspect ratio isn't square.
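One way to do this resizing and padding, sketched here with Pillow (illustrative code, not the model's official preprocessing; the `resize_and_pad` helper name is made up):

```python
from PIL import Image

# Resize to fit within a 1024x1024 square while preserving aspect ratio,
# then pad the remaining space with black pixels.
def resize_and_pad(image, size=1024):
    scale = size / max(image.width, image.height)
    new_w, new_h = round(image.width * scale), round(image.height * scale)
    resized = image.resize((new_w, new_h), Image.LANCZOS)
    canvas = Image.new("RGB", (size, size), (0, 0, 0))  # black padding
    canvas.paste(resized, ((size - new_w) // 2, (size - new_h) // 2))
    return canvas

# Demo with a synthetic gray 512x256 image: scaled to 1024x512, centered vertically
padded = resize_and_pad(Image.new("RGB", (512, 256), (200, 200, 200)))
```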
| 192 | + |
| 193 | +See the [Generating Segmentation for a Variety of Imaging Modalities](https://aka.ms/healthcare-ai-examples-mip-examples) notebook for techniques and sample code useful for submitting images of various sizes stored using various biomedical imaging formats. |
| 194 | + |
| 195 | +## Learn more from samples |
MedImageParse 3D is a versatile model that can be applied to a wide range of tasks and imaging modalities. For more examples, see the following interactive Python notebooks:
| 197 | + |
| 198 | +#### Getting started |
| 199 | +* [Deploying and Using MedImageParse](https://aka.ms/healthcare-ai-examples-mip-deploy): Learn how to deploy the MedImageParse model and integrate it into your workflow. |
| 200 | + |
| 201 | +#### Advanced inferencing techniques and samples |
| 202 | +* [Generating Segmentation for a Variety of Imaging Modalities](https://aka.ms/healthcare-ai-examples-mip-examples): Understand how to use MedImageParse to segment a wide variety of different medical images and learn some prompting techniques. |
| 203 | + |
| 204 | +## Related content |
| 205 | + |
| 206 | +* [CXRReportGen for grounded report generation](deploy-cxrreportgen.md) |
* [MedImageInsight for image embedding generation](deploy-medimageinsight.md)