articles/ai-services/language-service/concepts/migrate.md (+1 −1)

@@ -16,7 +16,7 @@ On November 2nd 2021, Azure AI Language was released into public preview. This l

## Do I need to migrate to the language service if I am using Text Analytics?

- Text Analytics has been incorporated into the language service, and its features are still available. If you were using Text Analytics, your applications should continue to work without breaking changes. You can also see the [Text Analytics migration guide](migrate-language-service-latest.md), if you need to update an older application.
+ Text Analytics has been incorporated into the language service, and its features are still available. If you were using Text Analytics features, your applications should continue to work without breaking changes. If you are using the Text Analytics API (v2.x or v3), see the [Text Analytics migration guide](migrate-language-service-latest.md) to migrate your applications to the unified Language endpoint and the latest client library.

Consider using one of the available quickstart articles to see the latest information on service endpoints and API calls.
articles/aks/image-cleaner.md (+2 −2)

@@ -1,14 +1,14 @@
---
title: Use Image Cleaner on Azure Kubernetes Service (AKS)
- description: Learn how to use Image Cleaner to clean up stale images on Azure Kubernetes Service (AKS)
+ description: Learn how to use Image Cleaner to clean up vulnerable stale images on Azure Kubernetes Service (AKS)
ms.author: nickoman
author: nickomang
ms.topic: article
ms.custom: devx-track-azurecli
ms.date: 01/22/2024
---

- # Use Image Cleaner to clean up stale images on your Azure Kubernetes Service (AKS) cluster
+ # Use Image Cleaner to clean up vulnerable stale images on your Azure Kubernetes Service (AKS) cluster
It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images might contain vulnerabilities, which might create security issues. To remove security risks in your clusters, you can clean these unreferenced images. Manually cleaning images can be time intensive. Image Cleaner performs automatic image identification and removal, which mitigates the risk of stale images and reduces the time required to clean them up.
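The automatic identification and removal described in that paragraph is enabled per cluster with the Az CLI. A minimal sketch, assuming a hypothetical resource group `myResourceGroup` and cluster `myAKSCluster`; the flag names below are my understanding of the AKS extension and should be verified against the current CLI reference:

```shell
# Enable Image Cleaner on an existing AKS cluster and set how often the
# scan-and-clean cycle runs (names here are illustrative placeholders).
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-image-cleaner \
    --image-cleaner-interval-hours 48
```

Running the same command without `--enable-image-cleaner` but with a new interval value adjusts an already-enabled cleaner rather than toggling it.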
articles/azure-monitor/overview.md (+1 −1)

@@ -263,7 +263,7 @@ No. Azure Monitor is a scalable cloud service that processes and stores large am

You can connect your existing System Center Operations Manager management group to Azure Monitor to collect data from agents into Azure Monitor Logs. This capability allows you to use log queries and solutions to analyze data collected from agents. You can also configure existing System Center Operations Manager agents to send data directly to Azure Monitor. See [Connect Operations Manager to Azure Monitor](agents/om-agents.md).

- Microsoft also offers System Center Operations Manager Managed Instance (SCOM MI) as an option to migrate a traditional SCOM setup into the cloud with minimal changes. For more information see [About Azure Monitor SCOM Managed Instance][/system-center/scom/operations-manager-managed-instance-overview].
+ Microsoft also offers System Center Operations Manager Managed Instance (SCOM MI) as an option to migrate a traditional SCOM setup into the cloud with minimal changes. For more information, see [About Azure Monitor SCOM Managed Instance](/system-center/scom/operations-manager-managed-instance-overview).
articles/container-registry/container-registry-transfer-images.md (+17 −17)

@@ -21,7 +21,7 @@ Please complete the prerequisites outlined [here](./container-registry-transfer-
- You have a recent version of Az CLI installed in both clouds.

> [!IMPORTANT]
- - The ACR Transfer supports artifacts with the layer size limits to 8 GB due to the technical limitations.
+ > ACR Transfer supports artifacts with layer sizes of up to 8 GB due to technical limitations.

## Consider using the Az CLI extension

@@ -31,7 +31,7 @@ For most nonautomated use-cases, we recommend using the Az CLI Extension if poss

Create an ExportPipeline resource for your source container registry using Azure Resource Manager template deployment.

- Copy ExportPipeline Resource Manager [template files](https://github.com/Azure/acr/tree/master/docs/image-transfer/ExportPipelines) to a local folder.
+ Copy ExportPipeline Resource Manager [template files](https://github.com/Azure/acr/tree/main/docs/image-transfer/ExportPipelines) to a local folder.

Enter the following parameter values in the file `azuredeploy.parameters.json`:

@@ -96,7 +96,7 @@ EXPORT_RES_ID=$(az deployment group show \

Create an ImportPipeline resource in your target container registry using Azure Resource Manager template deployment. By default, the pipeline is enabled to import automatically when the storage account in the target environment has an artifact blob.

- Copy ImportPipeline Resource Manager [template files](https://github.com/Azure/acr/tree/master/docs/image-transfer/ImportPipelines) to a local folder.
+ Copy ImportPipeline Resource Manager [template files](https://github.com/Azure/acr/tree/main/docs/image-transfer/ImportPipelines) to a local folder.

Enter the following parameter values in the file `azuredeploy.parameters.json`:

@@ -161,7 +161,7 @@ IMPORT_RES_ID=$(az deployment group show \

Create a PipelineRun resource for your source container registry using Azure Resource Manager template deployment. This resource runs the ExportPipeline resource you created previously, and exports specified artifacts from your container registry as a blob to your source storage account.

- Copy PipelineRun Resource Manager [template files](https://github.com/Azure/acr/tree/master/docs/image-transfer/PipelineRun/PipelineRun-Export) to a local folder.
+ Copy PipelineRun Resource Manager [template files](https://github.com/Azure/acr/tree/main/docs/image-transfer/PipelineRun/PipelineRun-Export) to a local folder.

Enter the following parameter values in the file `azuredeploy.parameters.json`:

@@ -234,7 +234,7 @@ If you didn't enable the `sourceTriggerStatus` parameter of the import pipeline,

You can also use a PipelineRun resource to trigger an ImportPipeline for artifact import to your target container registry.

- Copy PipelineRun Resource Manager [template files](https://github.com/Azure/acr/tree/master/docs/image-transfer/PipelineRun/PipelineRun-Import) to a local folder.
+ Copy PipelineRun Resource Manager [template files](https://github.com/Azure/acr/tree/main/docs/image-transfer/PipelineRun/PipelineRun-Import) to a local folder.

Enter the following parameter values in the file `azuredeploy.parameters.json`:

@@ -323,15 +323,15 @@ View [ACR Transfer Troubleshooting](container-registry-transfer-troubleshooting.
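Each "Copy … template files to a local folder" step above is followed by a template deployment, as the `az deployment group show` fragments in the hunk headers suggest. A minimal sketch for the ExportPipeline step, with hypothetical resource group, deployment, and path names; the `--query` path for the resource ID is an assumption and may differ from the template's actual outputs:

```shell
# Deploy the ExportPipeline template after filling in azuredeploy.parameters.json
# (resource group, deployment name, and file paths are illustrative placeholders).
az deployment group create \
    --resource-group mySourceRG \
    --name exportPipelineDeploy \
    --template-file ./ExportPipelines/azuredeploy.json \
    --parameters ./ExportPipelines/azuredeploy.parameters.json

# Capture the pipeline's resource ID for use when creating a PipelineRun later.
EXPORT_RES_ID=$(az deployment group show \
    --resource-group mySourceRG \
    --name exportPipelineDeploy \
    --query properties.outputResources[0].id \
    --output tsv)
```

The ImportPipeline and PipelineRun templates are deployed the same way, swapping in the target cloud's resource group and template folder.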
articles/container-registry/container-registry-transfer-prerequisites.md (+2 −3)

@@ -80,14 +80,14 @@ Transfer uses shared access signature (SAS) tokens to access the storage account

### Generate SAS token for export

- Run the [az storage account generate-sas][az-storage-account-generate-sas] command to generate a SAS token for the container in the source storage account, used for artifact export.
+ Run the [az storage container generate-sas][az-storage-container-generate-sas] command to generate a SAS token for the container in the source storage account, used for artifact export.

In the following example, command output is assigned to the EXPORT_SAS environment variable, prefixed with the '?' character. Update the `--expiry` value for your environment:
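A sketch of the corrected container-scoped command, assuming a hypothetical account `mystorageaccount` and container `transfer`; the permission string is an assumption about what export requires, so check the prerequisites article for the exact value:

```shell
# Generate a container-level SAS token for artifact export and prefix it
# with '?' as the transfer pipeline expects (names and expiry are placeholders).
EXPORT_SAS=?$(az storage container generate-sas \
    --account-name mystorageaccount \
    --name transfer \
    --permissions alrw \
    --expiry 2025-01-01T00:00:00Z \
    --https-only \
    --output tsv)
```

Note the command operates on the container (`az storage container generate-sas`), not the account, which is exactly the correction this diff makes.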
>| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.azure.com | postgres.database.azure.com |
+ >| Azure Database for PostgreSQL - Flexible server (Microsoft.DBforPostgreSQL/flexibleServers) | postgresqlServer | privatelink.postgres.database.azure.com | postgres.database.azure.com |
>| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com |
>| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com |

>| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.usgovcloudapi.net | postgres.database.usgovcloudapi.net |
+ >| Azure Database for PostgreSQL - Flexible server (Microsoft.DBforPostgreSQL/flexibleServers) | postgresqlServer | privatelink.postgres.database.usgovcloudapi.net | postgres.database.usgovcloudapi.net |
>| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net |
>| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net |

>| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.chinacloudapi.cn | postgres.database.chinacloudapi.cn |
+ >| Azure Database for PostgreSQL - Flexible server (Microsoft.DBforPostgreSQL/flexibleServers) | postgresqlServer | privatelink.postgres.database.chinacloudapi.cn | postgres.database.chinacloudapi.cn |
>| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.chinacloudapi.cn | mysql.database.chinacloudapi.cn |
>| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.chinacloudapi.cn | mysql.database.chinacloudapi.cn |
- To read a blob that is in the archive tier, you must first rehydrate the blob to an online tier (hotor cool) tier. You can rehydrate a blob in one of two ways:
+ To read a blob that is in the archive tier, you must first rehydrate the blob to an online (hot, cool, or cold) tier. You can rehydrate a blob in one of two ways:

- - By copying it to a new blob in the hotor cool tier with the [Copy Blob](/rest/api/storageservices/copy-blob) operation.
- - By changing its tier from archive to hotor cool with the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation.
+ - By copying it to a new blob in the hot, cool, or cold tier with the [Copy Blob](/rest/api/storageservices/copy-blob) operation.
+ - By changing its tier from archive to the hot, cool, or cold tier with the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation.

When you rehydrate a blob, you can set the priority for the operation to either standard or high. A standard-priority rehydration operation may take up to 15 hours to complete. A high-priority operation is prioritized over standard-priority requests and may complete in less than one hour for objects under 10 GB in size. You can change the rehydration priority from *Standard* to *High* while the operation is pending.
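The in-place path (Set Blob Tier with a rehydration priority) can be sketched with the Az CLI; account, container, and blob names below are illustrative placeholders, and authentication flags are omitted for brevity:

```shell
# Rehydrate an archived blob in place: change its tier from Archive to Hot
# with high priority (uses the Set Blob Tier operation under the hood).
az storage blob set-tier \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --name archived.dat \
    --tier Hot \
    --rehydrate-priority High
```

The copy path instead targets a new blob in an online tier, leaving the archived source blob in place.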
articles/virtual-wan/virtual-wan-faq.md (+1 −1)

@@ -149,7 +149,7 @@ Virtual WAN supports up to 20-Gbps aggregate throughput both for VPN and Express

### How is Virtual WAN different from an Azure virtual network gateway?

- A virtual network gateway VPN is limited to 30 tunnels. For connections, you should use Virtual WAN for large-scale VPN. You can connect up to 1,000 branch connections per virtual hub with aggregate of 20 Gbps per hub. A connection is an active-active tunnel from the on-premises VPN device to the virtual hub. You can also have multiple virtual hubs per region, which means you can connect more than 1,000 branches to a single Azure Region by deploying multiple Virtual WAN hubs in that Azure Region, each with its own site-to-site VPN gateway.
+ A virtual network gateway VPN is limited to 100 tunnels. For connections, you should use Virtual WAN for large-scale VPN. You can connect up to 1,000 branch connections per virtual hub with an aggregate of 20 Gbps per hub. A connection is an active-active tunnel from the on-premises VPN device to the virtual hub. You can also have multiple virtual hubs per region, which means you can connect more than 1,000 branches to a single Azure Region by deploying multiple Virtual WAN hubs in that Azure Region, each with its own site-to-site VPN gateway.

### <a name="packets"></a>What is the recommended algorithm and packets per second per site-to-site instance in a Virtual WAN hub? How many tunnels are supported per instance? What is the max throughput supported in a single tunnel?