
Commit 06cc963

Initial edits to address Acrolinx warnings
1 parent c5878aa commit 06cc963

File tree

4 files changed (+39 −42 lines)


articles/ai-studio/how-to/healthcare-ai/deploy-cxrreportgen.md

Lines changed: 10 additions & 11 deletions
@@ -11,7 +11,6 @@ reviewer: fkriti
 ms.author: mopeakande
 author: msakande
 ms.custom: references_regions, generated
-zone_pivot_groups: ?????
 ---
 
 # How to use CXRReportGen Healthcare AI Model to generate grounded findings
@@ -24,28 +23,28 @@ In this article, you learn how to deploy CXRReportGen as an online endpoint for
 
 * Deploy the model to a self-hosted managed compute.
 * Grant permissions to the endpoint.
-* Send test data to the model, receive and interpret results
+* Send test data to the model, receive, and interpret results
 
 ## CXRReportGen - grounded report generation model for chest X-rays
-Radiology reporting demands detailed image understanding, integration of multiple inputs (including comparisons with prior imaging), and precise language generation, making it an ideal candidate for generative multimodal models. CXRReportGen not only performs the task of generating a list of findings from a chest Xray study, but also extends it by incorporating the localization of individual findings on the image—a task we refer to as grounded report generation.
+Radiology reporting demands detailed image understanding, integration of multiple inputs (including comparisons with prior imaging), and precise language generation, making it an ideal candidate for generative multimodal models. CXRReportGen not only performs the task of generating a list of findings from a chest X-ray study, but also extends it by incorporating the localization of individual findings on the image—a task we refer to as grounded report generation.
 
-The animation below demonstrates the conceptual architecture of the CxrReportGen model which consists of an embedding model paired with a general reasoner LLM.
+The following animation demonstrates the conceptual architecture of the CxrReportGen model which consists of an embedding model paired with a general reasoner large language model (LLM).
 
 :::image type="content" source="../../media/how-to/healthcare-ai/healthcare-reportgen.gif" alt-text="Animation of CxrReportGen architecture and data flow":::
 
-Grounding enhances the clarity of image interpretation and the transparency of AI-generated text, thereby improving the utility of automated report drafting. The model combines a radiology-specific image encoder with a large language model and it takes as inputs a more comprehensive set of data than many traditional approaches: the current frontal image, the current lateral image, the prior frontal image, the prior report, and the Indication, Technique, and Comparison sections of the current report. These additions significantly enhance report quality and reduce hallucinations, demonstrating the feasibility of grounded reporting as a novel and richer task in automated radiology.
+Grounding enhances the clarity of image interpretation and the transparency of AI-generated text, thereby improving the utility of automated report drafting. The model combines a radiology-specific image encoder with a large language model and it takes as inputs a more comprehensive set of data than many traditional approaches: the current frontal image, the current lateral image, the prior frontal image, the prior report, and the Indication, Technique, and Comparison sections of the current report. These additions significantly enhance report quality and reduce incorrect information, demonstrating the feasibility of grounded reporting as a novel and richer task in automated radiology.
 
 ## Prerequisites
 
-To use CXRReportGen model with Azure AI Studio or Azure Machine Learning Studio, you need the following prerequisites:
+To use CXRReportGen model with Azure AI Studio or Azure Machine Learning studio, you need the following prerequisites:
 
 ### A model deployment
 
 **Deployment to a self-hosted managed compute**
 
 CXRReportGen model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served.
 
-The model can be deployed through the Model Catalog UI or programmatically. In order to deploy through the UI navigate to the [model card in the catalog](https://aka.ms/cxrreportgenmodelcard). Programmatic deployment is covered in the sample Jupyter Notebook linked at the end of this page.
+The model can be deployed through the Model Catalog UI or programmatically. In order to deploy through the UI, navigate to the [model card in the catalog](https://aka.ms/cxrreportgenmodelcard). Programmatic deployment is covered in the sample Jupyter Notebook linked at the end of this page.
 
 For deployment to a self-hosted managed compute, you must have enough quota in your subscription. If you don't have enough quota available, you can use our temporary quota access by selecting the option **I want to use shared quota and I acknowledge that this endpoint will be deleted in 168 hours.**
 
@@ -67,7 +66,7 @@ credential = DefaultAzureCredential()
 ml_client_workspace = MLClient.from_config(credential)
 ```
 
-Note that in the deployment configuration you get to choose authentication method. This example uses Azure ML Token-based authentication, for more authentication options see the [corresponding documentation page](../../../machine-learning/how-to-setup-authentication.md). Also note that client is created from configuration file. This file is created automatically for Azure Machine Learning VMs. Learn more on the [corresponding API documentation page](/python/api/azure-ai-ml/azure.ai.ml.mlclient?view=azure-python#azure-ai-ml-mlclient-from-config).
+In the deployment configuration you get to choose authentication method. This example uses Azure Machine Learning Token-based authentication, for more authentication options see the [corresponding documentation page](../../../machine-learning/how-to-setup-authentication.md). Also note that client is created from configuration file. This file is created automatically for Azure Machine Learning VMs. Learn more on the [corresponding API documentation page](/python/api/azure-ai-ml/azure.ai.ml.mlclient?view=azure-python#azure-ai-ml-mlclient-from-config).
 
 ### Make basic calls to the model
 
@@ -121,8 +120,8 @@ The `input_data` object contains the following fields:
 | Key | Type | Required/Default | Allowed values | Description |
 | ------------- | -------------- | :-----------------:| ----------------- | ------------------------------------------------------------ |
 | `columns` | `list[string]` | Y | `"frontal_image"`, `"lateral_image"`, `"prior_image"`,`"indication"`, `"technique"`, `"comparison"`, `"prior_report"` | An object containing the strings mapping data to inputs passed to the model.|
-| `index` | `integer` | Y | 0 - 10 | Count of inputs passed to the model. Note that you are limited by how much GPU RAM you have on the VM where CxrReportGen is hosted and by how much data can be passed in a single POST request which will depend on the size of your images, so it is reasonable to keep this number under 10. Check model logs if you are getting errors when passing multiple inputs. |
-| `data` | `list[list[string]]` | Y | "" | The list contains the list of items passed to the model. Length of the list is defined by the index parameter. Each item is a list of several strings, order and meaning is defined by the "columns" parameter. The text strings contains text, the image strings are the image bytes encoded using base64 and decoded as utf-8 string |
+| `index` | `integer` | Y | 0 - 10 | Count of inputs passed to the model. You are limited by how much GPU RAM you have on the VM where CxrReportGen is hosted and by how much data can be passed in a single POST request which will depend on the size of your images, so it's reasonable to keep this number under 10. Check model logs if you're getting errors when passing multiple inputs. |
+| `data` | `list[list[string]]` | Y | "" | The list contains the list of items passed to the model. Length of the list is defined by the index parameter. Each item is a list of several strings, order and meaning is defined by the "columns" parameter. The text strings contain text, the image strings are the image bytes encoded using base64 and decoded as utf-8 string |
 
 
 ### Request Example
@@ -185,7 +184,7 @@ Response payload is a JSON formatted string containing the following fields:
 ```
 
 ### Supported image formats
-The deployed model API supports images encoded in PNG or JPEG formats. For optimal results we recommend using uncompressed/lossless PNGs with 8-bit monochromatic images.
+The deployed model API supports images encoded in PNG or JPEG formats. For optimal results, we recommend using uncompressed/lossless PNGs with 8-bit monochromatic images.
 
 ## Learn more from samples
 CXRReportGen is a versatile model that can be applied to a wide range of tasks and imaging modalities. For more examples see the following interactive Python Notebooks:
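As context for this diff, the `input_data` schema documented in the table above (named columns such as `frontal_image` and `indication`, with images passed as base64 bytes decoded to a utf-8 string, and `index` described as a count of inputs) can be sketched as a small request builder. This is an illustrative sketch only, not code from the article: the helper name is hypothetical, and the exact wire shape accepted by a given deployment may differ.

```python
import base64
import json

def build_cxrreportgen_payload(frontal_image_path, indication="", technique="", comparison="None."):
    """Hypothetical helper: build one input row for a grounded-report request."""
    # Images are sent as base64-encoded bytes decoded into a utf-8 string,
    # per the `data` field description in the table above.
    with open(frontal_image_path, "rb") as f:
        frontal_b64 = base64.b64encode(f.read()).decode("utf-8")

    row = [frontal_b64, indication, technique, comparison]
    payload = {
        "input_data": {
            # Column order defines the meaning of each string in every `data` row.
            "columns": ["frontal_image", "indication", "technique", "comparison"],
            # The table describes `index` as the count of inputs (keep it under 10).
            "index": 1,
            "data": [row],
        }
    }
    return json.dumps(payload)
```

A body like this would then be POSTed to the scoring endpoint; per the table's note, keep the number of rows small, since GPU RAM and the single-request size limit bound how many images fit in one call.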

articles/ai-studio/how-to/healthcare-ai/deploy-medimageinsight.md

Lines changed: 13 additions & 14 deletions
@@ -10,10 +10,9 @@ ms.reviewer: itarapov
 reviewer: ivantarapov
 ms.author: mopeakande
 author: msakande
-zone_pivot_groups: ?????
 ---
 
-# How to use MedImageInsight Healthcare AI Model for medical image embedding generation
+# How to use MedImageInsight Healthcare AI model for medical image embedding generation
 
 [!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)]
 
@@ -23,20 +22,20 @@ In this article, you learn how to deploy MedImageInsight as an online endpoint f
 
 * Deploy the model to a self-hosted managed compute.
 * Grant permissions to the endpoint.
-* Send test data to the model, receive and interpret results
+* Send test data to the model, receive, and interpret results
 
 ## MedImageInsight - the medical imaging embedding model
 MedImageInsight foundational model for health is a powerful model that can process a wide variety of medical images including X-Ray, CT, MRI, clinical photography, dermoscopy, histopathology, ultrasound, and mammography. Rigorous evaluations demonstrate MedImageInsight's ability to achieve state-of-the-art (SOTA) or human expert level performance across classification, image-image search, and fine-tuning tasks. Specifically, on public datasets, MedImageInsight achieves or exceeds SOTA in chest X-ray disease classification and search, dermatology classification and search, OCT classification and search, 3D medical image retrieval, and near SOTA for histopathology classification and search.
 
-Embedding model is the "swiss army knife" of foundational models since it is capable of serving as the basis of many different solutions - from classification to more complex scenarios like group matching or outlier detection.
+Embedding model is the "swiss army knife" of foundational models since it's capable of serving as the basis of many different solutions - from classification to more complex scenarios like group matching or outlier detection.
 
 :::image type="content" source="../../media/how-to/healthcare-ai/healthcare-embedding-capabilities.gif" alt-text="Embedding model capable of supporting similarity search and quality control scenarios":::
 
-Here we will explain how to deploy MedImageInsight using the AI Model Catalog in Azure AI Studio or Azure Machine Learning Studio and provide links to more in-depth tutorials and samples.
+Here we'll explain how to deploy MedImageInsight using the AI Model Catalog in Azure AI Studio or Azure Machine Learning studio and provide links to more in-depth tutorials and samples.
 
 ## Prerequisites
 
-To use MedImageInsight models with Azure AI Studio or Azure Machine Learning Studio, you need the following prerequisites:
+To use MedImageInsight models with Azure AI Studio or Azure Machine Learning studio, you need the following prerequisites:
 
 ### A model deployment
 
@@ -66,7 +65,7 @@ credential = DefaultAzureCredential()
 ml_client_workspace = MLClient.from_config(credential)
 ```
 
-Note that in the deployment configuration you get to choose authentication method. This example uses Azure ML Token-based authentication, for more authentication options see the [corresponding documentation page](../../../machine-learning/how-to-setup-authentication.md). Also note that client is created from configuration file. This file is created automatically for Azure Machine Learning VMs. Learn more on the [corresponding API documentation page](/python/api/azure-ai-ml/azure.ai.ml.mlclient?view=azure-python#azure-ai-ml-mlclient-from-config).
+In the deployment configuration you get to choose authentication method. This example uses Azure Machine Learning Token-based authentication, for more authentication options see the [corresponding documentation page](../../../machine-learning/how-to-setup-authentication.md). Also note that client is created from configuration file. This file is created automatically for Azure Machine Learning VMs. Learn more on the [corresponding API documentation page](/python/api/azure-ai-ml/azure.ai.ml.mlclient?view=azure-python#azure-ai-ml-mlclient-from-config).
 
 ### Make basic calls to the model
 
@@ -130,14 +129,14 @@ The `input_data` object contains the following fields:
 | Key | Type | Required/Default | Allowed values | Description |
 | ------------- | -------------- | :-----------------:| ----------------- | ------------------------------------------------------------ |
 | `columns` | `list[string]` | Y | `"text"`, `"image"` | An object containing the strings mapping data to inputs passed to the model.|
-| `index` | `integer` | Y | 0 - 1024| Count of inputs passed to the model. Note that you are limited by how much data can be passed in a single POST request which will depend on the size of your images, so it is reasonable to keep this number in the dozens |
+| `index` | `integer` | Y | 0 - 1024| Count of inputs passed to the model. You are limited by how much data can be passed in a single POST request which will depend on the size of your images, so it's reasonable to keep this number in the dozens |
 | `data` | `list[list[string]]` | Y | "" | The list contains the items passed to the model which is defined by the index parameter. Each item is a list of two strings, order is defined by the "columns" parameter. The `text` string contains text to embed, the `image` string are the image bytes encoded using base64 and decoded as utf-8 string |
 
 The `params` object contains the following fields:
 
 | Key | Type | Required/Default | Allowed values | Description |
 | ------------- | -------------- | :-----------------:| ----------------- | ------------------------------------------------------------ |
-| `get_scaling_factor` | `boolean` | N<br/>`True` | `"True"` OR `"False"` | Whether the model should return "temperature" scaling factor. This factor is useful when you are planning to compare multiple cosine similarity values in application like classification. It is essential for correct implementation of "zero-shot" type of scenarios. For usage refer to the zero-shot classification example linked in the samples section. |
+| `get_scaling_factor` | `boolean` | N<br/>`True` | `"True"` OR `"False"` | Whether the model should return "temperature" scaling factor. This factor is useful when you're planning to compare multiple cosine similarity values in application like classification. It's essential for correct implementation of "zero-shot" type of scenarios. For usage, refer to the zero-shot classification example linked in the samples section. |
 
 ### Request Example
 
@@ -197,22 +196,22 @@ Response payload is a JSON formatted string containing the following fields:
 }
 ```
 
-### Additional implementation considerations
-The maximum number of tokens processed in the input string is 77. Anything past 77 tokens would be cut off before passed to the model. The model is using CLIP tokenizer which uses about 3 Latin characters per token.
+### Other implementation considerations
+The maximum number of tokens processed in the input string is 77. Anything past 77 tokens would be cut off before passed to the model. The model is using CLIP tokenizer which uses about three Latin characters per token.
 
 The submitted text is embedded into the same latent space as the image. This means that strings describing medical images of certain body parts obtained with certain imaging modalities would be embedded close to such images. This also means that when building systems on top of MedImageInsight model you should make sure that all your embedding strings are consistent with one another (word order, punctuation). For best results with base model strings should follow pattern `<image modality> <anatomy> <exam parameters> <condition/pathology>.`, for example: `x-ray chest anteroposterior Atelectasis.`.
 
-If you are fine tuning the model, you can change these parameters to better suit your application needs.
+If you're fine tuning the model, you can change these parameters to better suit your application needs.
 
 ### Supported image formats
 The deployed model API supports images encoded in PNG format.
 
-Upon receiving the images the model does pre-processing which involves compressing and resizing the images to `512x512` pixels.
+Upon receiving the images the model does preprocessing which involves compressing and resizing the images to `512x512` pixels.
 
 The preferred format is lossless PNG containing either an 8-bit monochromatic or RGB image. For optimization purposes, you can perform resizing on the client side to reduce network traffic.
 
 ## Learn more from samples
-MedImageInsight is a versatile model that can be applied to a wide range of tasks and imaging modalities. For more specific examples of solving a variety of tasks with MedImageInsight see the following interactive Python Notebooks.
+MedImageInsight is a versatile model that can be applied to a wide range of tasks and imaging modalities. For more specific examples of solving various tasks with MedImageInsight see the following interactive Python Notebooks.
 
 ### Getting started
 * [Deploying and Using MedImageInsight](https://aka.ms/healthcare-ai-examples-mi2-deploy): learn how to deploy the MedImageInsight model programmatically and issue an API call to it.
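The MedImageInsight payload described in this diff (two columns, `"text"` and `"image"`, the `get_scaling_factor` parameter, and the recommended `<image modality> <anatomy> <exam parameters> <condition/pathology>.` text pattern) can be sketched as a small request builder. This is a hypothetical illustration, not code from the article: the helper name is made up, column order follows the table's listing, and `index` is treated as the row count per the table's description.

```python
import base64
import json

def build_medimageinsight_payload(items, get_scaling_factor=True):
    """Hypothetical helper: `items` is a list of (text, image_bytes) pairs,
    where either element of a pair may be None."""
    rows = []
    for text, image_bytes in items:
        # Images travel as base64 bytes decoded into a utf-8 string; an absent
        # input is sent as an empty string.
        image_b64 = base64.b64encode(image_bytes).decode("utf-8") if image_bytes else ""
        rows.append([text or "", image_b64])
    return json.dumps({
        "input_data": {
            # Order matches the `columns` listing in the table above.
            "columns": ["text", "image"],
            "index": len(rows),  # described in the table as the count of inputs
            "data": rows,
        },
        # The scaling factor is needed when comparing multiple cosine
        # similarities, for example in zero-shot classification.
        "params": {"get_scaling_factor": get_scaling_factor},
    })

# Text strings follow the pattern recommended in the article:
# <image modality> <anatomy> <exam parameters> <condition/pathology>.
payload = build_medimageinsight_payload([("x-ray chest anteroposterior Atelectasis.", None)])
```

Keeping every embedding string in the same pattern (word order, punctuation) matters because text and images share one latent space; inconsistent phrasing shifts the text embeddings away from the images they describe.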
