
Commit 31d6bc9

Merge pull request #301573 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents cab2e48 + 8bb225e commit 31d6bc9

12 files changed (+16 −16 lines)


articles/azure-netapp-files/network-attached-storage-protocols.md

Lines changed: 1 addition & 1 deletion
@@ -119,7 +119,7 @@ Some organizations have pure Windows or pure UNIX environments (homogeneous) in
 * SMB and [NTFS](/windows-server/storage/file-server/ntfs-overview) file security
 * NFS and UNIX file security - mode bits or [NFSv4.x access control lists (ACLs)](https://wiki.linux-nfs.org/wiki/index.php/ACLs)

-However, many sites must enable data sets to be accessed from both Windows and UNIX clients (heterogenous). For environments with these requirements, Azure NetApp Files has native dual-protocol NAS support. After the user is authenticated on the network and has both appropriate share or export permissions and the necessary file-level permissions, the user can access the data from UNIX hosts using NFS or from Windows hosts using SMB.
+However, many sites must enable data sets to be accessed from both Windows and UNIX clients (heterogeneous). For environments with these requirements, Azure NetApp Files has native dual-protocol NAS support. After the user is authenticated on the network and has both appropriate share or export permissions and the necessary file-level permissions, the user can access the data from UNIX hosts using NFS or from Windows hosts using SMB.

 ### Reasons for using dual-protocol volumes
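The access rule in the changed paragraph (network authentication, plus share/export permission, plus file-level permission) can be sketched as a simple three-gate check. `can_access` and its parameter names are hypothetical, chosen for illustration only, and are not an Azure NetApp Files API:

```python
def can_access(authenticated: bool, share_or_export_ok: bool, file_perms_ok: bool) -> bool:
    """A user reaches dual-protocol data only when all three gates pass:
    network authentication, the share (SMB) or export (NFS) permission,
    and the file-level permission (NTFS ACL, mode bits, or NFSv4.x ACL)."""
    return authenticated and share_or_export_ok and file_perms_ok

# A Windows user with SMB share access and an NTFS permission gets in:
print(can_access(True, True, True))   # True
# Missing the file-level permission blocks access even after authentication:
print(can_access(True, True, False))  # False
```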
articles/azure-netapp-files/performance-virtual-machine-sku.md

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ Storage performance involves more than the speed of the storage itself. The proc
 For the most consistent performance when selecting virtual machines, select from SKUs with a single type of chipset – newer SKUs are preferred over the older models where available. Keep in mind that, aside from using a dedicated host, predicting correctly which type of hardware the E_v3 or D_v3 virtual machines land on is unlikely. When using the E_v3 or D_v3 SKU:

 * When a virtual machine is turned off, deallocated, and then turned on again, the virtual machine is likely to change hosts and as such hardware models.
-* When applications are deployed across multiple virtual machines, expect the virtual machines to run on heterogenous hardware.
+* When applications are deployed across multiple virtual machines, expect the virtual machines to run on heterogeneous hardware.

 ## Differences within and between SKUs

articles/backup/backup-support-matrix.md

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ The following table describes the features of Recovery Services vaults:
 **Machines in a vault** | Up to 2000 datasources across all workloads (like Azure VMs, SQL Server VM, MABS Servers, and so on) can be protected in a single vault.<br><br>Up to 1,000 Azure VMs in a single vault.<br/><br/> Up to 50 MABS servers can be registered in a single vault.
 **Data sources** | Maximum size of an individual [data source](./backup-azure-backup-faq.yml#how-is-the-data-source-size-determined-) is 54,400 GB. This limit doesn't apply to Azure VM backups. No limits apply to the total amount of data you can back up to the vault.
 **Backups to vault** | **Azure VMs:** Once a day.<br/><br/>**Machines protected by DPM/MABS:** Twice a day.<br/><br/> **Machines backed up directly by using the MARS agent:** Three times a day.
-**Backups between vaults** | Backup is within a region.<br/><br/> You need a vault in every Azure region that contains VMs you want to back up. You can't back up to a different region.
+**Backups between vaults** | Backup is within a region and subscription.<br/><br/> You need a vault in every Azure region and subscription that contains VMs you want to back up. You can't back up to a different region. Cross subscription backup (RS vault and protected VMs are in different subscriptions) isn't a supported scenario.
 **Move vaults** | You can [move vaults](./backup-azure-move-recovery-services-vault.md) across subscriptions or between resource groups in the same subscription. However, moving vaults across regions isn't supported.
 **Move data between vaults** | Moving backed-up data between vaults isn't supported.
 **Modify vault storage type** | You can modify the storage replication type (either geo-redundant storage or locally redundant storage) for a vault before backups are stored. After backups begin in the vault, the replication type can't be modified.

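The corrected "Backups between vaults" rule (vault and VM must share both region and subscription; neither cross-region nor cross-subscription backup is supported) can be sketched as a predicate. The function and its parameters are hypothetical names for illustration, not an Azure Backup API:

```python
def vault_can_protect_vm(vault_region: str, vault_subscription: str,
                         vm_region: str, vm_subscription: str) -> bool:
    """A Recovery Services vault can protect a VM only when both the
    region and the subscription match, per the support matrix above."""
    return vault_region == vm_region and vault_subscription == vm_subscription

print(vault_can_protect_vm("westeurope", "sub-a", "westeurope", "sub-a"))   # True
print(vault_can_protect_vm("westeurope", "sub-a", "northeurope", "sub-a"))  # False: cross-region
print(vault_can_protect_vm("westeurope", "sub-a", "westeurope", "sub-b"))   # False: cross-subscription
```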
articles/backup/install-mars-agent.md

Lines changed: 4 additions & 4 deletions
@@ -64,11 +64,11 @@ To modify the storage replication type, follow these steps:
 > You can't modify the storage replication type after the vault is set up and contains backup items. If you want to do this, you need to re-create the vault.
 >

-## Configure Recovery Services vault to save passphrase to Recovery Services vault
+## Configure Recovery Services vault to save passphrase to Azure Key Vault

-Azure Backup using the Recovery Services agent (MARS) allows you to back up file or folder and system state data to Azure Recovery Services vault. This data is encrypted using a passphrase provided during the installation and registration of the MARS agent. This passphrase is required to retrieve and restore the backup data and needs to be saved in a secure external location, such as Azure Key Vault.
+Azure Backup using the Recovery Services agent (MARS) allows you to back up file or folder and system state data to Azure Recovery Services vault. This data is encrypted using a passphrase provided during the installation and registration of the MARS agent. This passphrase is required to retrieve and restore the backup data and needs to be saved in a secure external location, such as Key Vault.

-We recommend you to create a Key Vault and provide permissions to your Recovery Services vault to save the passphrase to the Key Vault. [Learn more](save-backup-passphrase-securely-in-azure-key-vault.md).
+We recommend you create a key vault and provide permissions to your Recovery Services vault to save the passphrase to the key vault. [Learn more](save-backup-passphrase-securely-in-azure-key-vault.md).

 ### Verify internet access

@@ -152,4 +152,4 @@ To install and register the MARS agent, follow these steps:

 ## Next step

-Learn how to [Back up Windows machines by using the Azure Backup MARS agent](backup-windows-with-mars-agent.md)
+Learn how to [Back up Windows machines by using the Azure Backup MARS agent](backup-windows-with-mars-agent.md)
articles/confidential-computing/confidential-containers-on-aks-preview.md

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ In alignment with the guidelines set by the [Confidential Computing Consortium](
 * Code integrity: Runtime enforcement is always available through customer defined policies for containers and container configuration, such as immutable policies and container signing.
 * Isolation from operator: Security designs that assume least privilege and highest isolation shielding from all untrusted parties including customer/tenant admins. It includes hardening existing Kubernetes control plane access (kubelet) to confidential pods.

-But with these features of confidentiality, the product should additionally its ease of use: it supports all unmodified Linux containers with high Kubernetes feature conformance. Additionally, it supports heterogenous node pools (GPU, general-purpose nodes) in a single cluster to optimize for cost.
+With these confidentiality features fulfilled, the product should additionally support all unmodified Linux containers with high Kubernetes feature conformance. Additionally, it supports heterogeneous node pools (GPU, general-purpose nodes) in a single cluster to optimize cost.

 ## What forms Confidential Containers on AKS?

articles/confidential-computing/confidential-nodes-aks-faq.yml

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ sections:
     answer: |
       No. [AKS-Engine based confidential computing nodes](https://github.com/Azure/aks-engine/blob/master/docs/topics/sgx.md) support confidential computing nodes that allow custom installations and have full control over your Kubernetes control plane.

-  - question: Can I run ACC Nodes with other standard AKS SKUs (build a heterogenous node pool cluster)?
+  - question: Can I run ACC Nodes with other standard AKS SKUs (build a heterogeneous node pool cluster)?
     answer: |
       Yes, you can run different node pools within the same AKS cluster including ACC nodes. To target your enclave applications on a specific node pool, consider adding node selectors or applying EPC limits. Refer to more details on the quick start on confidential nodes [here](confidential-enclave-nodes-aks-get-started.md).

articles/iot-hub-device-update/understand-device-update.md

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ You can use Device Update management and deployment controls to maximize product
 - An update management experience integrated with Azure IoT Hub.
 - Programmatic APIs to enable automation and custom portal experiences.
 - Subscription- and role-based access controls available through the Azure portal.
-- At-a-glance update compliance and status views across heterogenous device fleets.
+- At-a-glance update compliance and status views across heterogeneous device fleets.
 - Azure CLI support for creating and managing Device Update resources, groups, and deployments.

 ### Control over deployment details

articles/iot-hub/iot-hub-device-management-overview.md

Lines changed: 1 addition & 1 deletion
@@ -100,7 +100,7 @@ Device Update for IoT Hub offers optimized update deployment and streamlined ope
 * Update management UX integrated with Azure IoT Hub
 * Gradual update rollout through device grouping and update scheduling controls
 * Programmatic APIs to enable automation and custom portal experiences
-* At-a-glance update compliance and status views across heterogenous device fleets
+* At-a-glance update compliance and status views across heterogeneous device fleets
 * Support for resilient device updates (A/B) to deliver seamless rollback
 * Content caching and disconnected device support, including those devices that are in nested configurations, through built-in Microsoft Connected Cache and integration with Azure IoT Edge
 * Subscription and role-based access controls available via the [Azure portal](https://portal.azure.com)

articles/migrate/create-application-assessment.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ ms.date: 11/05/2024

 This article shows you how to create an application assessment to migrate or modernize your application workloads using Azure Migrate. Creating an application assessment for your application provides you the multiple migration strategies that you can use to migrate your workloads identify the recommended as well as alternative targets and key insights such as **readiness**, **target right-sizing**, and **cost** to host and run these workloads on Azure month over month.

-You can also create a cross-workload assessment using the following steps. A cross-workload assessment can constitute multiple workloads that do not necessarily combine to form just one application, it can be a group of multiple applications or it can be just a group of heterogenous workloads in your datacenter.
+You can also create a cross-workload assessment using the following steps. A cross-workload assessment can constitute multiple workloads that do not necessarily combine to form just one application, it can be a group of multiple applications or it can be just a group of heterogeneous workloads in your datacenter.

 In this article, you’ll learn how to:
articles/sap/workloads/dbms-guide-sqlserver.md

Lines changed: 1 addition & 1 deletion
@@ -76,7 +76,7 @@ A VM configuration, which runs SQL Server with an SAP database and where tempdb

 The diagram displays a simple case. As eluded to in the article [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md), Azure storage type, number, and size of disks is dependent from different factors. But in general we recommend:

-- For smaller and mid-range deployments, using one large volume, which contains the SQL Server data files. Reason behind this configuration is that it's easier to deal with different I/O workloads in case the SQL Server data files don't have the same free space. Whereas in large deployments, especially deployments where the customer moved with a heterogenous database migration to SQL Server in Azure, we used separate disks and then distributed the data files across those disks. Such an architecture is only successful when each disk has the same number of data files, all the data files are the same size, and roughly have the same free space.
+- For smaller and mid-range deployments, using one large volume, which contains the SQL Server data files. Reason behind this configuration is that it's easier to deal with different I/O workloads in case the SQL Server data files don't have the same free space. Whereas in large deployments, especially deployments where the customer moved with a heterogeneous database migration to SQL Server in Azure, we used separate disks and then distributed the data files across those disks. Such an architecture is only successful when each disk has the same number of data files, all the data files are the same size, and roughly have the same free space.
 - Use the D:\drive for tempdb as long as performance is good enough. If the overall workload is limited in performance of tempdb located on the D:\ drive, you need to move tempdb to Azure premium storage v1 or v2, or Ultra disk as recommended in [this article](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist).

 SQL Server proportional fill mechanism distributes reads and writes to all datafiles evenly provided all SQL Server data files are the same size and have the same free space. SAP on SQL Server delivers the best performance when reads and writes are distributed evenly across all available datafiles. If a database has too few datafiles or the existing data files are highly unbalanced, the best method to correct is an R3load export and import. An R3load export and import involves downtime and should only be done if there's an obvious performance problem that needs to be resolved. If the datafiles are only moderately different sizes, increase all datafiles to the same size, and SQL Server is rebalancing data over time. SQL Server automatically grows datafiles evenly if trace flag 1117 is set or if SQL Server 2016 or higher is used without trace flag.

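The proportional fill behavior described in the last context paragraph above weights new allocations by each data file's free space, which is why equally sized files with equal free space see even I/O. A rough, simplified model of that weighting (illustrative only, not SQL Server's actual extent-allocation algorithm):

```python
def proportional_fill_weights(free_space_mb: list[float]) -> list[float]:
    """Share of new allocations each data file receives, proportional to
    its free space; a simplified model of SQL Server's proportional fill."""
    total = sum(free_space_mb)
    return [f / total for f in free_space_mb]

# Four files with equal free space fill evenly (the recommended layout):
print(proportional_fill_weights([100, 100, 100, 100]))  # [0.25, 0.25, 0.25, 0.25]

# An unbalanced file absorbs most new writes, skewing I/O until sizes are equalized:
print(proportional_fill_weights([300, 50, 50, 100]))    # [0.6, 0.1, 0.1, 0.2]
```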