Commit 64d8df1

Merge pull request #288013 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents 698b757 + d7554ba

4 files changed (+18 −18 lines changed)

articles/application-gateway/configuration-infrastructure.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -133,7 +133,7 @@ After you configure *active public and private listeners* (with rules) *with the
 
 | Source | Source ports | Destination | Destination ports | Protocol | Access |
 |---|---|---|---|---|---|
-|`<as per need>`|Any|`<Public and Private<br/>frontend IPs>`|`<listener ports>`|TCP|Allow|
+|`<as per need>`|Any|`<Public and Private frontend IPs>`|`<listener ports>`|TCP|Allow|
 
 **Infrastructure ports**: Allow incoming requests from the source as the **GatewayManager** service tag and **Any** destination. The destination port range differs based on SKU and is required for communicating the status of the backend health. These ports are protected/locked down by Azure certificates. External entities can't initiate changes on those endpoints without appropriate certificates in place.
 
```
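The infrastructure-ports rule described in the diff above could be sketched with the Azure CLI as follows. This is a hypothetical sketch, not the article's own example: the resource group, NSG name, rule name, and priority are made-up placeholders, and the destination port range `65200-65535` is an assumption for the v2 SKU (the article only says the range "differs based on SKU"), so verify it for your deployment.

```shell
# Hypothetical sketch of an NSG rule allowing GatewayManager infrastructure
# traffic into an Application Gateway subnet. All names are placeholders;
# the port range 65200-65535 is an assumption for the v2 SKU.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myAppGwSubnetNsg \
  --name AllowGatewayManagerInfrastructure \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes GatewayManager \
  --source-port-ranges '*' \
  --destination-address-prefixes '*' \
  --destination-port-ranges 65200-65535
```

This is a configuration fragment that requires an Azure subscription to run; it is shown only to make the table's Source/Destination/Access columns concrete.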

articles/backup/azure-kubernetes-service-backup-overview.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -74,7 +74,7 @@ Azure Backup for AKS currently supports the following two options when doing a r
 2. **Patch**: This option allows the patching mutable variable in the backed-up resource on the resource in the target cluster. If you want to update the number of replicas in the target cluster, you can opt for patching as an operation.
 
 >[!Note]
->AKS backup currently doesn't delete and recreate resources in the target cluster if they already exist. If you attempt to restore Persistent Volumess in the original location, delete the existing Persistent Volumes, and then do the restore operation.
+>AKS backup currently doesn't delete and recreate resources in the target cluster if they already exist. If you attempt to restore Persistent Volumes in the original location, delete the existing Persistent Volumes, and then do the restore operation.
 
 
 ## Use custom hooks for backup and restore
```

articles/healthcare-apis/fhir/fhir-best-practices.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -25,7 +25,7 @@ Azure FHIR service supports data ingestion through the import operation, which o
 
 To achieve optimal performance with the import operation, consider the following best practices.
 
-* **Do** use large files while ingesting data. The optimal DNJSON file size for import is 50 MB or larger (or 20,000 resources or more, with no upper limit). Combining smaller files into larger ones can enhance performance.
+* **Do** use large files while ingesting data. The optimal NDJSON file size for import is 50 MB or larger (or 20,000 resources or more, with no upper limit). Combining smaller files into larger ones can enhance performance.
 * **Consider** using the import operation over HTTP API requests to ingest the data into FHIR service. The import operation provides a high throughput and is a scalable method for loading data.
 * **Consider** importing all FHIR resource files in a single import operation for optimal performance. Aim for a total file size of 100 GB or more (or 100 million resources, no upper limit) in one operation. Maximizing an import in this way helps reduce the overhead associated with managing multiple import jobs.
 * **Consider** running multiple concurrent imports only if necessary, but limit parallel import jobs. A single large import is designed to consume all available system resources, and processing throughput doesn't increase with concurrent import jobs.
```
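The "combining smaller files into larger ones" guidance in the corrected bullet is straightforward to act on, because NDJSON is newline-delimited (one JSON resource per line) and files can be concatenated directly. A minimal sketch, with made-up file names and sample records standing in for real FHIR exports:

```shell
# NDJSON holds one JSON resource per line, so small export files can be
# concatenated directly into a larger file for import.
# Sample data and paths are hypothetical placeholders.
mkdir -p exports combined
printf '{"resourceType":"Patient","id":"a"}\n' > exports/Patient-1.ndjson
printf '{"resourceType":"Patient","id":"b"}\n' > exports/Patient-2.ndjson
# Combine the small files into one larger NDJSON file
cat exports/Patient-*.ndjson > combined/Patient-combined.ndjson
```

Because each line is a complete resource, the combined file's line count equals its resource count, which makes it easy to check against the 20,000-resources-per-file guidance.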

includes/elastic-san-regions.md

Lines changed: 15 additions & 15 deletions

````diff
@@ -11,35 +11,35 @@
 ---
 The following list contains the regions Elastic SAN is currently available in, and which regions support both zone-redundant storage (ZRS) and locally redundant storage (LRS), or only LRS:
 
-- South Africa North - LRS
-- East Asia - LRS
-- Southeast Asia - LRS
+- Australia East - LRS
 - Brazil South - LRS
 - Canada Central - LRS
+- Central US - LRS
+- East Asia - LRS
+- East US - LRS
+- East US 2 - LRS
 - France Central - LRS & ZRS
 - Germany West Central - LRS
-- Australia East - LRS
-- North Europe - LRS & ZRS
-- West Europe - LRS & ZRS
-- UK South - LRS
+- India Central - LRS
 - Japan East - LRS
 - Korea Central - LRS
-- Central US - LRS
-- East US - LRS
+- North Europe - LRS & ZRS
+- Norway East - LRS
+- South Africa North - LRS
 - South Central US - LRS
-- East US 2 - LRS
-- West US 2 - LRS & ZRS
-- West US 3 - LRS
+- Southeast Asia - LRS
 - Sweden Central - LRS
 - Switzerland North - LRS
-- Norway East - LRS
 - UAE North - LRS
-- India Central - LRS
+- UK South - LRS
+- West Europe - LRS & ZRS
+- West US 2 - LRS & ZRS
+- West US 3 - LRS
 
 Elastic SAN is also available in the following regions, but without Availability Zone support:
 - Canada East - LRS
-- North Central US - LRS
 - Japan West - LRS
+- North Central US - LRS
 
 To enable these regions, run the following command to register the necessary feature flag:
 ```azurepowershell
````
