
Commit 22aadbf

review cxreportgen and minor edits to medimageinsight

1 parent 3496eb9

File tree: 2 files changed (+33 −25 lines)

articles/ai-studio/how-to/healthcare-ai/deploy-cxrreportgen.md

Lines changed: 27 additions & 19 deletions
@@ -13,7 +13,7 @@ author: msakande
 ms.custom: references_regions, generated
 ---

-# How to use CXRReportGen Healthcare AI Model to generate grounded findings
+# How to use CXRReportGen Healthcare AI model to generate grounded findings

 [!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)]

@@ -26,13 +26,13 @@ In this article, you learn how to deploy CXRReportGen as an online endpoint for
 * Send test data to the model, receive, and interpret results

 ## CXRReportGen - grounded report generation model for chest X-rays
-Radiology reporting demands detailed image understanding, integration of multiple inputs (including comparisons with prior imaging), and precise language generation, making it an ideal candidate for generative multimodal models. CXRReportGen not only performs the task of generating a list of findings from a chest X-ray study, but also extends it by incorporating the localization of individual findings on the image—a task we refer to as grounded report generation.
+Radiology reporting demands detailed image understanding, integration of multiple inputs (including comparisons with prior imaging), and precise language generation, making it an ideal candidate for generative multimodal models. CXRReportGen generates a list of findings from a chest X-ray study and also performs a _grounded report generation_ or _grounding_ task. That is, the model also incorporates the localization of individual findings on the image. Grounding enhances the clarity of image interpretation and the transparency of AI-generated text, ultimately improving the utility of automated report drafting.

-The following animation demonstrates the conceptual architecture of the CxrReportGen model which consists of an embedding model paired with a general reasoner large language model (LLM).
+The following animation demonstrates the conceptual architecture of the CXRReportGen model, which consists of an embedding model paired with a general reasoner large language model (LLM).

-:::image type="content" source="../../media/how-to/healthcare-ai/healthcare-reportgen.gif" alt-text="Animation of CxrReportGen architecture and data flow":::
+:::image type="content" source="../../media/how-to/healthcare-ai/healthcare-reportgen.gif" alt-text="Animation of CXRReportGen architecture and data flow.":::

-Grounding enhances the clarity of image interpretation and the transparency of AI-generated text, thereby improving the utility of automated report drafting. The model combines a radiology-specific image encoder with a large language model and it takes as inputs a more comprehensive set of data than many traditional approaches: the current frontal image, the current lateral image, the prior frontal image, the prior report, and the Indication, Technique, and Comparison sections of the current report. These additions significantly enhance report quality and reduce incorrect information, demonstrating the feasibility of grounded reporting as a novel and richer task in automated radiology.
+The CXRReportGen model combines a radiology-specific image encoder with a large language model and takes as inputs a more comprehensive set of data than many traditional approaches. The input data includes the current frontal image, the current lateral image, the prior frontal image, the prior report, and the indication, technique, and comparison sections of the current report. These additions significantly enhance report quality and reduce incorrect information, ultimately demonstrating the feasibility of grounded reporting as a novel and richer task in automated radiology.

 ## Prerequisites

@@ -44,18 +44,27 @@ To use CXRReportGen model with Azure AI Studio or Azure Machine Learning studio,

 CXRReportGen model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served.

-The model can be deployed through the Model Catalog UI or programmatically. In order to deploy through the UI, navigate to the [model card in the catalog](https://aka.ms/cxrreportgenmodelcard). Programmatic deployment is covered in the sample Jupyter Notebook linked at the end of this page.
+You can deploy the model through the model catalog UI or programmatically. To deploy through the UI:

-For deployment to a self-hosted managed compute, you must have enough quota in your subscription. If you don't have enough quota available, you can use our temporary quota access by selecting the option **I want to use shared quota and I acknowledge that this endpoint will be deleted in 168 hours.**
+- Go to the [model card in the catalog](https://aka.ms/cxrreportgenmodelcard).
+- On the model's overview page, select __Deploy__.
+- If given the option to choose between serverless API deployment and deployment using a managed compute, select **Managed Compute**.
+- Fill out the details in the deployment window.
+
+  > [!NOTE]
+  > For deployment to a self-hosted managed compute, you must have enough quota in your subscription. If you don't have enough quota available, you can use our temporary quota access by selecting the option **I want to use shared quota and I acknowledge that this endpoint will be deleted in 168 hours.**
+- Select __Deploy__.
+
+To deploy the model programmatically, see [How to deploy and inference a managed compute deployment with code](../deploy-models-managed.md).

-> [!div class="nextstepaction"]
-> [Deploy the model to managed compute](../../concepts/deployments-overview.md)

 ## Work with a grounded report generation model for chest X-ray analysis

-### Using REST API to consume the model
+In this section, you consume the model and make basic calls to it.
+
+### Use REST API to consume the model

-CXRReportGen report generation model can be consumed as a REST API using simple GET requests or by creating a client like so:
+Consume the CXRReportGen report generation model as a REST API, using simple GET requests or by creating a client as follows:

 ```python
 from azure.ai.ml import MLClient
@@ -66,11 +75,11 @@ credential = DefaultAzureCredential()
 ml_client_workspace = MLClient.from_config(credential)
 ```

-In the deployment configuration you get to choose authentication method. This example uses Azure Machine Learning Token-based authentication, for more authentication options see the [corresponding documentation page](../../../machine-learning/how-to-setup-authentication.md). Also note that client is created from configuration file. This file is created automatically for Azure Machine Learning VMs. Learn more on the [corresponding API documentation page](/python/api/azure-ai-ml/azure.ai.ml.mlclient?view=azure-python#azure-ai-ml-mlclient-from-config).
+In the deployment configuration, you get to choose the authentication method. This example uses Azure Machine Learning token-based authentication. For more authentication options, see the [corresponding documentation page](../../../machine-learning/how-to-setup-authentication.md). Also note that the client is created from a configuration file that is generated automatically for Azure Machine Learning virtual machines (VMs). Learn more on the [corresponding API documentation page](/python/api/azure-ai-ml/azure.ai.ml.mlclient?view=azure-python#azure-ai-ml-mlclient-from-config).

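With the client in place, the deployed endpoint can be called through `MLClient.online_endpoints.invoke`, which takes a JSON request file. The following sketch is not part of the commit: the endpoint and deployment names, the sample image path, and the exact field layout are illustrative assumptions based on the `input_data` table later in this article.

```python
import base64
import json
from pathlib import Path

def build_request_file(frontal_image_path: str, indication: str,
                       out_path: str = "request.json") -> str:
    """Write a CXRReportGen-style request body to a JSON file.

    Column names and the columns/index/data layout follow the input_data
    table in this article; the image bytes are base64-encoded and decoded
    to a UTF-8 string, as the article describes.
    """
    image_b64 = base64.b64encode(Path(frontal_image_path).read_bytes()).decode("utf-8")
    body = {
        "input_data": {
            "columns": ["frontal_image", "indication"],
            "index": 1,  # count of inputs, per the table in this article
            "data": [[image_b64, indication]],
        }
    }
    Path(out_path).write_text(json.dumps(body))
    return out_path

# Hypothetical invocation with the ml_client_workspace created above;
# the endpoint and deployment names are placeholders for your deployment:
#
# response = ml_client_workspace.online_endpoints.invoke(
#     endpoint_name="cxrreportgen-endpoint",
#     deployment_name="cxrreportgen-deployment",
#     request_file=build_request_file("frontal.png", "Cough and fever."),
# )
```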
 ### Make basic calls to the model

-Once the model is deployed, you can use the following code to send data and retrieve list of findings and corresponding bounding boxes.
+Once the model is deployed, use the following code to send data and retrieve a list of findings and corresponding bounding boxes.

 ```python
 input_data = {
@@ -120,11 +129,11 @@ The `input_data` object contains the following fields:
 | Key | Type | Required/Default | Allowed values | Description |
 | ------------- | -------------- | :-----------------:| ----------------- | -------------------------------------------------------------------------- |
 | `columns` | `list[string]` | Y | `"frontal_image"`, `"lateral_image"`, `"prior_image"`,`"indication"`, `"technique"`, `"comparison"`, `"prior_report"` | An object containing the strings mapping data to inputs passed to the model.|
-| `index` | `integer` | Y | 0 - 10 | Count of inputs passed to the model. You are limited by how much GPU RAM you have on the VM where CxrReportGen is hosted and by how much data can be passed in a single POST request which will depend on the size of your images, so it's reasonable to keep this number under 10. Check model logs if you're getting errors when passing multiple inputs. |
-| `data` | `list[list[string]]` | Y | "" | The list contains the list of items passed to the model. Length of the list is defined by the index parameter. Each item is a list of several strings, order and meaning is defined by the "columns" parameter. The text strings contain text, the image strings are the image bytes encoded using base64 and decoded as utf-8 string |
+| `index` | `integer` | Y | 0 - 10 | Count of inputs passed to the model. You're limited by how much GPU RAM you have on the VM where CXRReportGen is hosted, and by how much data can be passed in a single POST request, which depends on the size of your images. Therefore, it's reasonable to keep this number under 10. Check model logs if you're getting errors when passing multiple inputs. |
+| `data` | `list[list[string]]` | Y | "" | The list contains the list of items passed to the model. The length of the list is defined by the `index` parameter. Each item is a list of several strings. The order and meaning are defined by the `columns` parameter. The text strings contain text. The image strings are the image bytes encoded using base64 and decoded as a UTF-8 string. |

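As a quick illustration of the constraints in the table above, a payload can be sanity-checked before it's sent. This sketch is not part of the commit; the helper name and the strictness of the checks are assumptions:

```python
def check_payload(input_data: dict) -> None:
    """Sanity-check an input_data object against the table above.

    Assumes `index` is the count of inputs and that each item in `data`
    supplies one string per entry in `columns`.
    """
    allowed = {"frontal_image", "lateral_image", "prior_image",
               "indication", "technique", "comparison", "prior_report"}
    unknown = set(input_data["columns"]) - allowed
    if unknown:
        raise ValueError(f"unknown columns: {unknown}")
    if not 0 <= input_data["index"] <= 10:
        raise ValueError("the article suggests keeping the input count at 10 or below")
    if len(input_data["data"]) != input_data["index"]:
        raise ValueError("length of `data` must match `index`")
    for item in input_data["data"]:
        if len(item) != len(input_data["columns"]):
            raise ValueError("each data item needs one string per column")

# Passes silently; the image string is a stand-in for real base64 bytes.
check_payload({
    "columns": ["frontal_image", "indication"],
    "index": 1,
    "data": [["<base64-encoded-image>", "Cough and fever."]],
})
```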
-### Request Example
+### Request example

 **A simple inference requesting list of findings for a single frontal image with no indication provided**
 ```JSON
@@ -187,10 +196,9 @@ Response payload is a JSON formatted string containing the following fields:
 The deployed model API supports images encoded in PNG or JPEG formats. For optimal results, we recommend using uncompressed/lossless PNGs with 8-bit monochromatic images.

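Since the endpoint accepts only PNG- or JPEG-encoded images, the format can be verified from a file's magic bytes before encoding. A minimal sketch, not part of the commit:

```python
def detect_image_format(data: bytes) -> str:
    """Return 'png' or 'jpeg' based on the file's magic bytes."""
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    if data.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    raise ValueError("unsupported format: the endpoint accepts PNG or JPEG")
```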
 ## Learn more from samples
-CXRReportGen is a versatile model that can be applied to a wide range of tasks and imaging modalities. For more examples see the following interactive Python Notebooks:
+CXRReportGen is a versatile model that can be applied to a wide range of tasks and imaging modalities. For more examples, see the following interactive Python notebook:

-### Getting started
-* [Deploying and Using CXRReportGen](https://aka.ms/healthcare-ai-examples-cxr-deploy): learn how to deploy the CXRReportGen model and integrate it into your workflow. This notebook also covers bounding box parsing and visualization techniques.
+* [Deploying and Using CXRReportGen](https://aka.ms/healthcare-ai-examples-cxr-deploy): Learn how to deploy the CXRReportGen model and integrate it into your workflow. This notebook also covers bounding-box parsing and visualization techniques.

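The bounding-box parsing that the notebook covers can be sketched as follows. This is not part of the commit, and the finding structure is an assumption (the response schema isn't shown in this diff): each finding is taken to carry a normalized `[x1, y1, x2, y2]` box that must be scaled to pixel coordinates.

```python
def box_to_pixels(box, width, height):
    """Scale a normalized [x1, y1, x2, y2] box to integer pixel coordinates."""
    x1, y1, x2, y2 = box
    return (round(x1 * width), round(y1 * height),
            round(x2 * width), round(y2 * height))

# Hypothetical finding entry with assumed normalized coordinates:
finding = {"text": "Left basilar opacity", "box": [0.55, 0.60, 0.80, 0.85]}
print(box_to_pixels(finding["box"], width=2048, height=2048))
```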
 ## Related content

articles/ai-studio/how-to/healthcare-ai/deploy-medimageinsight.md

Lines changed: 6 additions & 6 deletions
@@ -29,7 +29,7 @@ MedImageInsight foundational model for health is a powerful model that can proce

 An embedding model is capable of serving as the basis of many different solutions—from classification to more complex scenarios like group matching or outlier detection. The following animation shows an embedding model being used for image similarity search and to detect images that are outliers.

-:::image type="content" source="../../media/how-to/healthcare-ai/healthcare-embedding-capabilities.gif" alt-text="Animation that shows an embedding model capable of supporting similarity search and quality control scenarios":::
+:::image type="content" source="../../media/how-to/healthcare-ai/healthcare-embedding-capabilities.gif" alt-text="Animation that shows an embedding model capable of supporting similarity search and quality control scenarios.":::

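The similarity-search and outlier-detection uses described above reduce to vector comparisons over the embeddings. A minimal sketch, not part of the commit (the vectors are toy stand-ins for real MedImageInsight embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_similar(query, catalog):
    """Return the index of the catalog embedding closest to the query."""
    return max(range(len(catalog)), key=lambda i: cosine(query, catalog[i]))

# Toy 3-dimensional embeddings; real ones are much higher-dimensional.
catalog = [[1.0, 0.0, 0.1], [0.9, 0.1, 0.0], [0.0, 1.0, 0.9]]
query = [0.95, 0.05, 0.05]
print(most_similar(query, catalog))  # index of the nearest catalog image

# An image whose best similarity to the catalog falls below a chosen
# threshold can be flagged as an outlier.
```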
 ## Prerequisites

@@ -72,7 +72,7 @@ credential = DefaultAzureCredential()
 ml_client_workspace = MLClient.from_config(credential)
 ```

-In the deployment configuration, you get to choose the authentication method. This example uses Azure Machine Learning Token-based authentication. For more authentication options, see the [corresponding documentation page](../../../machine-learning/how-to-setup-authentication.md). Also note the client is created from a configuration file that is created automatically for Azure Machine Learning virtual machines (VMs). Learn more on the [corresponding API documentation page](/python/api/azure-ai-ml/azure.ai.ml.mlclient?view=azure-python#azure-ai-ml-mlclient-from-config).
+In the deployment configuration, you get to choose the authentication method. This example uses Azure Machine Learning token-based authentication. For more authentication options, see the [corresponding documentation page](../../../machine-learning/how-to-setup-authentication.md). Also note that the client is created from a configuration file that is generated automatically for Azure Machine Learning virtual machines (VMs). Learn more on the [corresponding API documentation page](/python/api/azure-ai-ml/azure.ai.ml.mlclient?view=azure-python#azure-ai-ml-mlclient-from-config).

 ### Make basic calls to the model

@@ -137,7 +137,7 @@ The `input_data` object contains the following fields:
 | ------------- | -------------- | :-----------------:| ----------------- | -------------------------------------------------------------------------- |
 | `columns` | `list[string]` | Y | `"text"`, `"image"` | An object containing the strings mapping data to inputs passed to the model.|
 | `index` | `integer` | Y | 0 - 1024| Count of inputs passed to the model. You're limited by how much data can be passed in a single POST request, which depends on the size of your images. Therefore, you should keep this number in the dozens |
-| `data` | `list[list[string]]` | Y | "" | The list contains the items passed to the model which is defined by the index parameter. Each item is a list of two strings. The order is defined by the "columns" parameter. The `text` string contains text to embed. The `image` strings are the image bytes encoded using base64 and decoded as utf-8 string |
+| `data` | `list[list[string]]` | Y | "" | The list contains the items passed to the model; its length is defined by the `index` parameter. Each item is a list of two strings. The order is defined by the `columns` parameter. The `text` string contains text to embed. The `image` strings are the image bytes encoded using base64 and decoded as a UTF-8 string. |

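Per this table, each data item pairs one text string with one base64-encoded image string. A sketch of building such a payload, not part of the commit (the helper and sample file names are assumptions based on the table):

```python
import base64
from pathlib import Path

def make_item(text: str, image_path: str) -> list:
    """Build one [text, image] data item as the table above describes."""
    image_b64 = base64.b64encode(Path(image_path).read_bytes()).decode("utf-8")
    return [text, image_b64]

# Hypothetical usage; the image path is a placeholder:
# items = [make_item("chest X-ray, frontal view", "xray1.png")]
# input_data = {"columns": ["text", "image"], "index": len(items), "data": items}
```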
 The `params` object contains the following fields:

@@ -220,15 +220,15 @@ The preferred compression format is lossless PNG, containing either an 8-bit mon
 ## Learn more from samples
 MedImageInsight is a versatile model that can be applied to a wide range of tasks and imaging modalities. For more specific examples of solving various tasks with MedImageInsight, see the following interactive Python notebooks.

-### Getting started
+#### Getting started
 * [Deploying and Using MedImageInsight](https://aka.ms/healthcare-ai-examples-mi2-deploy): Learn how to deploy the MedImageInsight model programmatically and issue an API call to it.

-### Classification techniques
+#### Classification techniques
 * [Building a Zero-Shot Classifier](https://aka.ms/healthcare-ai-examples-mi2-zero-shot): Discover how to use MedImageInsight to create a classifier without the need for training or large amount of labeled ground truth data.

 * [Enhancing Classification with Adapter Networks](https://aka.ms/healthcare-ai-examples-mi2-adapter): Improve classification performance by building a small adapter network on top of MedImageInsight.

-### Advanced applications
+#### Advanced applications
 * [Inferring MRI Acquisition Parameters from Pixel Data](https://aka.ms/healthcare-ai-examples-mi2-exam-parameter): Understand how to extract MRI exam acquisition parameters directly from imaging data.

 * [Scalable MedImageInsight Endpoint Usage](https://aka.ms/healthcare-ai-examples-mi2-advanced-call): Learn how to generate embeddings of medical images at scale using the MedImageInsight API while handling potential network issues gracefully.
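The zero-shot pattern in the classification notebook above can be sketched with plain embedding arithmetic. This is not part of the commit; the label prompts and toy vectors are illustrative stand-ins for real MedImageInsight text and image embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def zero_shot_classify(image_emb, label_embs):
    """Pick the label whose text embedding is closest to the image embedding."""
    return max(label_embs, key=lambda label: cosine(image_emb, label_embs[label]))

# Toy embeddings standing in for MedImageInsight outputs:
label_embs = {
    "pneumonia": [0.9, 0.1, 0.0],
    "no finding": [0.1, 0.9, 0.1],
}
print(zero_shot_classify([0.8, 0.2, 0.1], label_embs))  # prints "pneumonia"
```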
