Commit 1f052c4

Review of medimageinsight how-to
1 parent 4d800ef commit 1f052c4

File tree

1 file changed (+36, -29 lines)

articles/ai-studio/how-to/healthcare-ai/deploy-medimageinsight.md

Lines changed: 36 additions & 29 deletions
@@ -12,26 +12,24 @@ ms.author: mopeakande
author: msakande
---

# How to use MedImageInsight healthcare AI model for medical image embedding generation

[!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)]

[!INCLUDE [health-ai-models-meddev-disclaimer](../../includes/health-ai-models-meddev-disclaimer.md)]

In this article, you learn how to deploy MedImageInsight from the model catalog as an online endpoint for real-time inference. You also learn to issue a basic call to the API. The steps you take are:

* Deploy the model to a self-hosted managed compute.
* Grant permissions to the endpoint.
* Send test data to the model, and receive and interpret the results.

## MedImageInsight - the medical imaging embedding model
The MedImageInsight foundational model for health is a powerful model that can process a wide variety of medical images. These images include X-Ray, CT, MRI, clinical photography, dermoscopy, histopathology, ultrasound, and mammography images. Rigorous evaluations demonstrate MedImageInsight's ability to achieve state-of-the-art (SOTA) or human expert-level performance across classification, image-to-image search, and fine-tuning tasks. Specifically, on public datasets, MedImageInsight achieves or exceeds SOTA performance in chest X-ray disease classification and search, dermatology classification and search, optical coherence tomography (OCT) classification and search, and 3D medical image retrieval. The model also achieves near-SOTA performance for histopathology classification and search.

An embedding model is capable of serving as the basis of many different solutions, from classification to more complex scenarios like group matching or outlier detection. The following animation shows an embedding model being used for image similarity search and to detect images that are outliers.

:::image type="content" source="../../media/how-to/healthcare-ai/healthcare-embedding-capabilities.gif" alt-text="Animation that shows an embedding model capable of supporting similarity search and quality control scenarios":::

## Prerequisites

@@ -43,18 +41,27 @@ To use MedImageInsight models with Azure AI Studio or Azure Machine Learning stu

The MedImageInsight model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served.

You can deploy the model through the model catalog UI or programmatically. To deploy through the UI:

- Go to the [model card in the catalog](https://aka.ms/mi2modelcard).
- On the model's overview page, select __Deploy__.
- If given the option to choose between serverless API deployment and deployment using a managed compute, select **Managed Compute**.
- Fill out the details in the deployment window.

> [!NOTE]
> For deployment to a self-hosted managed compute, you must have enough quota in your subscription. If you don't have enough quota available, you can use our temporary quota access by selecting the option **I want to use shared quota and I acknowledge that this endpoint will be deleted in 168 hours.**

- Select __Deploy__.

To deploy the model programmatically, see [How to deploy and inference a managed compute deployment with code](../deploy-models-managed.md).
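
As a rough illustration, a programmatic deployment with the `azure-ai-ml` SDK might look like the following sketch. The model asset ID, endpoint name, deployment name, and instance type are placeholders; check the model card in the catalog for the exact model ID and choose a SKU for which your subscription has quota.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint
from azure.identity import DefaultAzureCredential

# Connect to the workspace; from_config() reads subscription, resource group, and workspace from config.json.
ml_client = MLClient.from_config(DefaultAzureCredential())

# Placeholder values: confirm the model's asset ID on its model card and pick an instance type with available quota.
MODEL_ID = "azureml://registries/azureml/models/MedImageInsight/labels/latest"
ENDPOINT_NAME = "medimageinsight-endpoint"

# Create the managed online endpoint.
endpoint = ManagedOnlineEndpoint(name=ENDPOINT_NAME, auth_mode="aml_token")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy the catalog model to the endpoint on self-hosted managed compute.
deployment = ManagedOnlineDeployment(
    name="medimageinsight-deployment",
    endpoint_name=ENDPOINT_NAME,
    model=MODEL_ID,
    instance_type="Standard_NC6s_v3",  # placeholder GPU SKU
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```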

## Work with an embedding model

In this section, you consume the model and make basic calls to it.

### Use REST API to consume the model

Consume the MedImageInsight embedding model as a REST API by using simple GET requests or by creating a client, as follows:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
ml_client_workspace = MLClient.from_config(credential)
```

In the deployment configuration, you get to choose the authentication method. This example uses Azure Machine Learning token-based authentication. For more authentication options, see the [corresponding documentation page](../../../machine-learning/how-to-setup-authentication.md). Also note that the client is created from a configuration file that's generated automatically for Azure Machine Learning virtual machines (VMs). Learn more on the [corresponding API documentation page](/python/api/azure-ai-ml/azure.ai.ml.mlclient?view=azure-python#azure-ai-ml-mlclient-from-config).
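
With the client in place, you might invoke a deployed endpoint as in the following sketch. The endpoint name, deployment name, and request file are placeholders; the request JSON follows the schema described in the next sections.

```python
# Placeholder names: substitute the endpoint and deployment you created.
response = ml_client_workspace.online_endpoints.invoke(
    endpoint_name="medimageinsight-endpoint",
    deployment_name="medimageinsight-deployment",
    request_file="request.json",  # payload that follows the input_data schema described below
)
print(response)
```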

### Make basic calls to the model

@@ -129,16 +136,16 @@ The `input_data` object contains the following fields:
| Key | Type | Required/Default | Allowed values | Description |
| ------------- | -------------- | :-----------------:| ----------------- | ----------- |
| `columns` | `list[string]` | Y | `"text"`, `"image"` | An object containing the strings mapping data to inputs passed to the model.|
| `index` | `integer` | Y | 0 - 1024 | Count of inputs passed to the model. You're limited by how much data can be passed in a single POST request, which depends on the size of your images. Therefore, you should keep this number in the dozens. |
| `data` | `list[list[string]]` | Y | "" | The list contains the items passed to the model; its length is defined by the `index` parameter. Each item is a list of two strings whose order is defined by the `columns` parameter. The `text` string contains the text to embed. The `image` string contains the image bytes encoded using base64 and decoded as a utf-8 string. A sketch that assembles such a payload follows this table. |
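
As an illustration of this schema, the following hypothetical sketch builds a request payload with a single input consisting of one base64-encoded PNG image and one text string. The file names and the column order are placeholders; match them to the request example and the model card for your deployment.

```python
import base64
import json

# Placeholder input file: any PNG image on disk.
with open("chest_xray.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "input_data": {
        "columns": ["image", "text"],
        "index": [0],  # one row of input in this request
        "data": [[image_b64, "x-ray chest anteroposterior Atelectasis."]],
    },
    "params": {"get_scaling_factor": "True"},
}

# Write the payload to a file that can be passed to online_endpoints.invoke().
with open("request.json", "w") as f:
    json.dump(payload, f)
```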

The `params` object contains the following fields:

| Key | Type | Required/Default | Allowed values | Description |
| ------------- | -------------- | :-----------------:| ----------------- | ----------- |
| `get_scaling_factor` | `boolean` | N<br/>`True` | `"True"` OR `"False"` | Whether the model should return the "temperature" scaling factor. This factor is useful when you're planning to compare multiple cosine similarity values in an application like classification. It's essential for correct implementation of "zero-shot" scenarios. For usage, refer to the zero-shot classification example linked in the [Classification techniques](#classification-techniques) section. |

### Request example

**A simple inference requesting embedding of a single string**
```JSON
@@ -197,32 +204,32 @@ Response payload is a JSON formatted string containing the following fields:
```

### Other implementation considerations
The maximum number of tokens processed in the input string is 77. Anything past 77 tokens is cut off before being passed to the model. The model uses a Contrastive Language-Image Pre-Training (CLIP) tokenizer, which uses about three Latin characters per token.
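
If you want to check input length on the client side before calling the endpoint, a rough sketch using the Hugging Face `transformers` CLIP tokenizer might look like this. This tokenizer is an approximation; it isn't necessarily identical to the tokenizer used by the deployed model.

```python
from transformers import CLIPTokenizer

# Approximation of the model's tokenizer; the 77-token limit includes special tokens.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

text = "x-ray chest anteroposterior Atelectasis."
token_count = len(tokenizer(text)["input_ids"])
if token_count > 77:
    print(f"Warning: {token_count} tokens; text past 77 tokens is cut off by the model.")
```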

The submitted text is embedded into the same latent space as the image. As a result, strings describing medical images of certain body parts obtained with certain imaging modalities are embedded close to such images. Also, when building systems on top of the MedImageInsight model, you should make sure that all your embedding strings are consistent with one another (word order and punctuation). For best results with the base model, strings should follow the pattern `<image modality> <anatomy> <exam parameters> <condition/pathology>.`, for example: `x-ray chest anteroposterior Atelectasis.`.

If you're fine-tuning the model, you can change these parameters to better suit your application needs.
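
To illustrate how consistent label strings and the scaling factor come together, here's a hypothetical zero-shot style comparison. It assumes you already retrieved an image embedding, one text embedding per label string, and the scaling factor from earlier calls to the endpoint; see the zero-shot classification sample for the end-to-end version.

```python
import numpy as np

# Label strings built from one consistent pattern: <image modality> <anatomy> <exam parameters> <condition/pathology>.
LABELS = [
    "x-ray chest anteroposterior Atelectasis.",
    "x-ray chest anteroposterior Cardiomegaly.",
    "x-ray chest anteroposterior No Finding.",
]

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(image_embedding, label_embeddings, scaling_factor):
    # Scale the cosine similarities and turn them into probability-like scores with a softmax.
    scores = scaling_factor * np.array(
        [cosine_similarity(image_embedding, emb) for emb in label_embeddings]
    )
    probabilities = np.exp(scores - scores.max())
    probabilities /= probabilities.sum()
    return LABELS[int(np.argmax(probabilities))], probabilities
```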

### Supported image formats
The deployed model API supports images encoded in PNG format.

When the model receives the images, it does preprocessing that involves compressing and resizing the images to `512x512` pixels.

The preferred compression format is lossless PNG, containing either an 8-bit monochromatic or RGB image. For optimization purposes, you can perform resizing on the client side to reduce network traffic.
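
For client-side preparation, a minimal sketch using Pillow could look like the following. The input file name is a placeholder, and the resize step is optional because the service resizes images anyway.

```python
import base64
import io

from PIL import Image

# Placeholder input file: convert to 8-bit RGB, optionally downsize to reduce network traffic,
# and re-encode as lossless PNG before base64 encoding.
image = Image.open("scan.jpg").convert("RGB")
image = image.resize((512, 512))

buffer = io.BytesIO()
image.save(buffer, format="PNG")
image_b64 = base64.b64encode(buffer.getvalue()).decode("utf-8")
```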

## Learn more from samples
MedImageInsight is a versatile model that can be applied to a wide range of tasks and imaging modalities. For more specific examples of solving various tasks with MedImageInsight, see the following interactive Python notebooks.

### Getting started
* [Deploying and Using MedImageInsight](https://aka.ms/healthcare-ai-examples-mi2-deploy): Learn how to deploy the MedImageInsight model programmatically and issue an API call to it.

### Classification techniques
* [Building a Zero-Shot Classifier](https://aka.ms/healthcare-ai-examples-mi2-zero-shot): Discover how to use MedImageInsight to create a classifier without the need for training or a large amount of labeled ground truth data.

* [Enhancing Classification with Adapter Networks](https://aka.ms/healthcare-ai-examples-mi2-adapter): Improve classification performance by building a small adapter network on top of MedImageInsight.

### Advanced applications
* [Inferring MRI Acquisition Parameters from Pixel Data](https://aka.ms/healthcare-ai-examples-mi2-exam-parameter): Understand how to extract MRI exam acquisition parameters directly from imaging data.

* [Scalable MedImageInsight Endpoint Usage](https://aka.ms/healthcare-ai-examples-mi2-advanced-call): Learn how to generate embeddings of medical images at scale using the MedImageInsight API while handling potential network issues gracefully.
