`articles/container-apps/gpu-types.md` (+18 −29)
```diff
@@ -4,15 +4,15 @@ description: Learn to how select the most appropriate GPU type for your containe
 services: container-apps
 author: craigshoemaker
 ms.service: azure-container-apps
-ms.topic: how-to
-ms.date: 03/18/2025
+ms.topic: conceptual
+ms.date: 06/02/2025
 ms.author: cshoe
 ai-usage: ai-generated
 ---

 # Comparing GPU types in Azure Container Apps

-Azure Container Apps supports serverless GPU acceleration (preview), enabling compute-intensive machine learning and AI workloads in containerized environments. This capability allows you to use GPU hardware without managing the underlying infrastructure, following the serverless model that defines Container Apps.
+Azure Container Apps supports serverless GPU acceleration, enabling compute-intensive machine learning and AI workloads in containerized environments. This capability allows you to use GPU hardware without managing the underlying infrastructure, following the serverless model that defines Container Apps.

 This article compares the NVIDIA T4 and A100 GPU options available in Azure Container Apps. Understanding the technical differences between these GPU types is important as you optimize your containerized applications for performance, cost-efficiency, and workload requirements.
```
```diff
@@ -22,25 +22,30 @@ The fundamental differences between T4 and A100 GPU types involve the amount of

 | GPU type | Description |
 |---|---|
-| T4 | Delivers cost-effective acceleration ideal for inference workloads and mainstream AI applications. The GPU is built on the Turing architecture, which provides sufficient computational power for most production inference scenarios. |
-| A100 | Features performance advantages for demanding workloads that require maximum computational power. The [massive memory capacity](#specs) helps you work with large language models, complex computer vision applications, or scientific simulations that wouldn't fit in the T4's more limited memory. |
+| T4 | Delivers cost-effective acceleration ideal for inference workloads and mainstream AI applications. |
+| A100 | Features performance advantages for demanding workloads that require maximum computational power. The [extended memory capacity](#specs) helps you work with large language models, complex computer vision applications, or scientific simulations that wouldn't fit in the T4's more limited memory. |

 The following table provides a comparison of the technical specifications between the NVIDIA T4 and NVIDIA A100 GPUs available in Azure Container Apps. These specifications highlight the key hardware differences, performance capabilities, and optimal use cases for each GPU type.

 <a name="specs"></a>

 | Specification | NVIDIA T4 | NVIDIA A100 |
 |---------------|-----------|-------------|
 | **Memory** | 16GB VRAM | 40GB or 80GB HBM2/HBM2e |
-| **Optimal Model Size** | Small models (<5GB) | Medium to large models (>5GB) |
+| **Optimal Model Size** | Small models (<10GB) | Medium to large models (>10GB) |
 | **Best Use Cases** | Cost-effective inference, mainstream AI applications | Training workloads, large models, complex computer vision, scientific simulations |
+
+Choosing between the T4 and A100 GPUs requires careful consideration of several key factors. The primary workload type should guide the initial decision: for inference-focused workloads, especially with smaller models, the T4 often provides sufficient performance at a more attractive price point. For training-intensive workloads or inference with large models, the A100's superior performance becomes more valuable and often necessary.
+
+Model size and complexity represent another critical decision factor. For small models (under 5GB), the T4's 16GB memory is typically adequate. For medium-sized models (5-15GB), consider testing on both GPU types to determine the optimal cost vs. performance for your situation. Large models (over 15GB) often require the A100's expanded memory capacity and bandwidth.
+
+Evaluate your performance requirements carefully. For baseline acceleration needs, the T4 provides a good balance of performance and cost. For maximum performance in demanding applications, the A100 delivers superior results, especially for large-scale AI and high-performance computing workloads. Latency-sensitive applications benefit from the A100's higher compute capability and memory bandwidth, which reduce processing time.
+
+If you begin using a T4 GPU and later decide to move to an A100, request a quota capacity adjustment.

 ## Differences between GPU types
```
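The decision guidance added above (workload type first, then the 5 GB / 15 GB model-size thresholds against the T4's 16 GB VRAM) can be sketched as a small helper. This is an illustrative sketch only: the function name and its return strings are hypothetical and not part of any Azure API; the thresholds come straight from the article text.

```python
def suggest_gpu(model_size_gb: float, training: bool = False) -> str:
    """Suggest a GPU type per the article's guidance (illustrative only).

    Thresholds follow the text: small models (< 5 GB) fit the T4's
    16 GB VRAM; large models (> 15 GB) usually need the A100; models
    in between warrant benchmarking on both types.
    """
    if training:
        return "A100"        # training workloads favor the A100's performance
    if model_size_gb < 5:
        return "T4"          # cost-effective inference for small models
    if model_size_gb > 15:
        return "A100"        # needs the larger memory capacity and bandwidth
    return "T4 or A100"      # 5-15 GB: test both for cost vs. performance


print(suggest_gpu(2))                  # T4
print(suggest_gpu(20))                 # A100
print(suggest_gpu(8))                  # T4 or A100
print(suggest_gpu(3, training=True))   # A100
```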
```diff
@@ -52,30 +57,14 @@ For inference workloads, choosing between T4 and A100 depends on several factors

 The T4 provides the most cost-effective inference acceleration, particularly when deploying smaller models. The A100, however, delivers substantially higher inference performance, especially for large models, where it can perform faster than the T4 GPU.

-When looking to scale, the T4 often provides a better cost-performance ratio, while the A100 excels in scenarios requiring maximum performance. The A100 type is especially suited for large models or when using MIG to serve multiple inference workloads simultaneously.
+When looking to scale, the T4 often provides a better cost-performance ratio, while the A100 excels in scenarios requiring maximum performance. The A100 type is especially suited for large models.

 ### Training workloads

 For AI training workloads, the difference between these GPUs becomes even more pronounced. The T4, while capable of handling small model training, faces significant limitations for modern deep learning training.

 The A100 is overwhelmingly superior for training workloads, delivering up to 20 times better performance for large models compared to the T4. The substantially larger memory capacity (40GB or 80GB) enables training of larger models without the need for complex model parallelism techniques in many cases. The A100's higher memory bandwidth also significantly accelerates data loading during training, reducing overall training time.

-### Mixed precision and specialized workloads
-
-The capabilities for mixed precision and specialized compute formats differ significantly between these GPUs. The T4 supports FP32 and FP16 precision operations, providing reasonable acceleration for mixed precision workloads. However, its support for specialized formats is limited compared to the A100.
-
-The A100 offers comprehensive support for a wide range of precision formats, including TF32, FP32, FP16, BFLOAT16, INT8, and INT4. Since the A100 uses TensorFloat-32 (TF32), this GPU provides the mathematical accuracy of FP32 while delivering higher performance.
-
-For workloads that benefit from mixed precision or require specialized formats, the A100 offers significant advantages in terms of both performance and flexibility.
-
-## Selecting a GPU type
-
-Choosing between the T4 and A100 GPUs requires careful consideration of several key factors. The primary workload type should guide the initial decision: for inference-focused workloads, especially with smaller models, the T4 often provides sufficient performance at a more attractive price point. For training-intensive workloads or inference with large models, the A100's superior performance becomes more valuable and often necessary.
-
-Model size and complexity represent another critical decision factor. For small models (under 5GB), the T4's 16GB memory is typically adequate. For medium-sized models (5-15GB), consider testing on both GPU types to determine the optimal cost vs. performance for your situation. Large models (over 15GB) often require the A100's expanded memory capacity and bandwidth.
-
-Evaluate your performance requirements carefully. For baseline acceleration needs, the T4 provides a good balance of performance and cost. For maximum performance in demanding applications, the A100 delivers superior results, especially for large-scale AI and high-performance computing workloads. Latency-sensitive applications benefit from the A100's higher compute capability and memory bandwidth, which reduce processing time.
-
 ## Special considerations

 Keep in mind the following exceptions when you're selecting a GPU type:
```
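The mixed-precision discussion above (removed in this change) compares FP16 and FP32 support; the practical difference between those formats is how much numeric detail each can represent. A quick NumPy sketch of the rounding a lower-precision format introduces — illustrative and GPU-independent, not specific to either the T4 or the A100:

```python
import numpy as np

# FP32 keeps roughly 7 significant decimal digits; FP16 keeps roughly 3.
x32 = np.float32(1.0001)
x16 = np.float16(1.0001)

# Half precision cannot represent 1.0001: representable values near 1.0
# are spaced 2**-10 (about 0.001) apart, so the value rounds away.
print(float(x16) == 1.0001)              # False: rounded in FP16
print(abs(float(x32) - 1.0001) < 1e-6)   # True: FP32 keeps the detail
```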
`articles/container-apps/networking.md` (+1 −1)
```diff
@@ -274,7 +274,7 @@ You can use NAT Gateway to simplify outbound connectivity for your outbound inte

 When you configure a NAT Gateway on your subnet, the NAT Gateway provides a static public IP address for your environment. All outbound traffic from your container app is routed through the NAT Gateway's static public IP address.

 The public network access setting determines whether your container apps environment is accessible from the public internet. Whether you can change this setting after creating your environment depends on the environment's virtual IP configuration. The following table shows valid values for public network access, depending on your environment's virtual IP configuration.
```
`articles/data-factory/connector-impala.md` (+100 −1)
```diff
@@ -6,7 +6,7 @@ author: jianleishen
 ms.subservice: data-movement
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 10/20/2023
+ms.date: 06/05/2025
 ms.author: jianleishen
 ---

 # Copy data from Impala using Azure Data Factory or Synapse Analytics
```
```diff
@@ -15,6 +15,9 @@ ms.author: jianleishen

 This article outlines how to use Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Impala. It builds on the [Copy Activity overview](copy-activity-overview.md) article that presents a general overview of the copy activity.

+> [!IMPORTANT]
+> The Impala connector version 2.0 (Preview) provides improved native Impala support. If you are using the Impala connector version 1.0 in your solution, [upgrade your Impala connector](#upgrade-the-impala-connector) before **September 30, 2025**. Refer to this [section](#differences-between-impala-version-20-and-version-10) for details on the differences between version 2.0 (Preview) and version 1.0.
+
 ## Supported capabilities

 This Impala connector is supported for the following capabilities:
```
````diff
@@ -67,6 +70,62 @@ The following sections provide details about properties that are used to define

 ## Linked service properties

+The Impala connector now supports version 2.0 (Preview). Refer to this [section](#upgrade-the-impala-connector) to upgrade your Impala connector version from version 1.0. For the property details, see the corresponding sections.
+
+- [Version 2.0 (Preview)](#version-20)
+- [Version 1.0](#version-10)
+
+### <a name="version-20"></a> Version 2.0 (Preview)
+
+The Impala linked service supports the following properties when you apply version 2.0 (Preview):
+
+| Property | Description | Required |
+|:--- |:--- |:--- |
+| type | The type property must be set to **Impala**. | Yes |
+| version | The version that you specify. The value is `2.0`. | Yes |
+| host | The IP address or host name of the Impala server (that is, 192.168.222.160). | Yes |
+| port | The TCP port that the Impala server uses to listen for client connections. The default value is 21050. | No |
+| thriftTransportProtocol | The transport protocol to use in the Thrift layer. Allowed values are: **Binary**, **HTTP**. The default value is Binary. | Yes |
+| authenticationType | The authentication type to use. <br/>Allowed values are **Anonymous** and **UsernameAndPassword**. | Yes |
+| username | The user name used to access the Impala server. | No |
+| password | The password that corresponds to the user name when you use UsernameAndPassword. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | No |
+| enableSsl | Specifies whether the connections to the server are encrypted by using TLS. The default value is true. | No |
+| enableServerCertificateValidation | Specify whether to enable server SSL certificate validation when you connect. Always use System Trust Store. The default value is true. | No |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from the [Prerequisites](#prerequisites) section. If not specified, the default Azure Integration Runtime is used. | No |
+
+**Example:**
+
+```json
+{
+    "name": "ImpalaLinkedService",
+    "properties": {
+        "type": "Impala",
+        "version": "2.0",
+        "typeProperties": {
+            "host" : "<host>",
+            "port" : "<port>",
+            "authenticationType" : "UsernameAndPassword",
+            "username" : "<username>",
+            "password": {
+                "type": "SecureString",
+                "value": "<password>"
+            },
+            "enableSsl": true,
+            "thriftTransportProtocol": "Binary",
+            "enableServerCertificateValidation": true
+        },
+        "connectVia": {
+            "referenceName": "<name of Integration Runtime>",
+            "type": "IntegrationRuntimeReference"
+        }
+    }
+}
+```
+
+### Version 1.0
+
+The following properties are supported for the Impala linked service when you apply version 1.0:
+
 The following properties are supported for Impala linked service.

 | Property | Description | Required |
````
````diff
@@ -184,10 +243,50 @@ To copy data from Impala, set the source type in the copy activity to **ImpalaSo
 ]
 ```

+## Data type mapping for Impala
+
+When you copy data from and to Impala, the following interim data type mappings are used within the service. To learn about how the copy activity maps the source schema and data type to the sink, see [Schema and data type mappings](copy-activity-schema-and-type-mapping.md).
+
+| Impala data type | Interim service data type (for version 2.0 (Preview)) | Interim service data type (for version 1.0) |
+|:--- |:--- |:--- |
+| ARRAY | String | String |
+| BIGINT | Int64 | Int64 |
+| BOOLEAN | Boolean | Boolean |
+| CHAR | String | String |
+| DATE | DateTime | DateTime |
+| DECIMAL | Decimal | Decimal |
+| DOUBLE | Double | Double |
+| FLOAT | Single | Single |
+| INT | Int32 | Int32 |
+| MAP | String | String |
+| SMALLINT | Int16 | Int16 |
+| STRING | String | String |
+| STRUCT | String | String |
+| TIMESTAMP | DateTimeOffset | DateTime |
+| TINYINT | SByte | Int16 |
+| VARCHAR | String | String |
+
 ## Lookup activity properties

 To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).

+## Upgrade the Impala connector
+
+Here are steps that help you upgrade the Impala connector:
+
+1. On the **Edit linked service** page, select version 2.0 (Preview) and configure the linked service by referring to [Linked service properties version 2.0](#version-20).
+
+2. The data type mapping for the Impala linked service version 2.0 (Preview) is different from that for version 1.0. To learn the latest data type mapping, see [Data type mapping for Impala](#data-type-mapping-for-impala).
+
+## <a name="differences-between-impala-version-20-and-version-10"></a> Differences between Impala version 2.0 (Preview) and version 1.0
+
+The Impala connector version 2.0 (Preview) offers new functionalities and is compatible with most features of version 1.0. The following table shows the feature differences between version 2.0 (Preview) and version 1.0.
+
+| Version 2.0 (Preview) | Version 1.0 |
+|:--- |:--- |
+| The SASLUsername authentication type is not supported. | Supports the SASLUsername authentication type. |
+| The default value of `enableSSL` is true. `trustedCertPath`, `useSystemTrustStore`, `allowHostNameCNMismatch`, and `allowSelfSignedServerCert` are not supported.<br><br>`enableServerCertificateValidation` is supported. | The default value of `enableSSL` is false. `trustedCertPath`, `useSystemTrustStore`, `allowHostNameCNMismatch`, and `allowSelfSignedServerCert` are supported.<br><br>`enableServerCertificateValidation` is not supported. |
+| The following mappings are used from Impala data types to interim service data types.<br><br>TIMESTAMP -> DateTimeOffset<br>TINYINT -> SByte | The following mappings are used from Impala data types to interim service data types.<br><br>TIMESTAMP -> DateTime<br>TINYINT -> Int16 |

 ## Related content
 For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
````
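The version 2.0 (Preview) mappings added above, and the two types whose mapping changed from version 1.0, can be captured in a small lookup for validation scripting. This is an illustrative sketch only — these dictionaries are not part of the service — and the naive/aware datetime pair at the end is just an analogy for the DateTime vs. DateTimeOffset distinction (a wall-clock value versus one that carries a UTC offset):

```python
from datetime import datetime, timezone

# Impala -> interim service data type, per the version 2.0 (Preview) column.
IMPALA_TO_SERVICE_V2 = {
    "ARRAY": "String", "BIGINT": "Int64", "BOOLEAN": "Boolean",
    "CHAR": "String", "DATE": "DateTime", "DECIMAL": "Decimal",
    "DOUBLE": "Double", "FLOAT": "Single", "INT": "Int32",
    "MAP": "String", "SMALLINT": "Int16", "STRING": "String",
    "STRUCT": "String", "TIMESTAMP": "DateTimeOffset",
    "TINYINT": "SByte", "VARCHAR": "String",
}

# Only two mappings differ in version 1.0.
V1_OVERRIDES = {"TIMESTAMP": "DateTime", "TINYINT": "Int16"}
IMPALA_TO_SERVICE_V1 = {**IMPALA_TO_SERVICE_V2, **V1_OVERRIDES}

# DateTime vs. DateTimeOffset, by analogy with naive vs. aware datetimes:
naive = datetime(2025, 6, 5, 12, 0)                       # no offset info
aware = datetime(2025, 6, 5, 12, 0, tzinfo=timezone.utc)  # carries an offset

print(naive.tzinfo)         # None
print(aware.utcoffset())    # 0:00:00
```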
`articles/expressroute/evaluate-circuit-resiliency.md` (+1 −1)
```diff
@@ -12,7 +12,7 @@ ms.custom: ai-usage

 # Evaluate the resiliency of multi-site redundant ExpressRoute circuits

-The [guided portal experience](expressroute-howto-circuit-portal-resource-manager.md?pivots=expressroute-preview) assists in the configuration of ExpressRoute circuits for maximum resiliency. The subsequent diagram illustrates the logical architecture of an ExpressRoute circuit designed for maximum resiliency."
+The [guided portal experience](expressroute-howto-circuit-portal-resource-manager.md?pivots=expressroute-preview) assists in the configuration of ExpressRoute circuits for maximum resiliency. The subsequent diagram illustrates the logical architecture of an ExpressRoute circuit designed for maximum resiliency.

 :::image type="content" source=".\media\evaluate-circuit-resiliency\maximum-resiliency.png" alt-text="Diagram of ExpressRoute circuits configured with maximum resiliency.":::
```