Commit a0a26d1

Merge pull request #105233 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)
2 parents: 2f09afd + aab837e

File tree

9 files changed (+34 lines, −22 lines)


articles/active-directory/conditional-access/concept-conditional-access-grant.md

Lines changed: 1 addition & 1 deletion

@@ -62,7 +62,7 @@ Organizations can choose to use the device identity as part of their Conditional

### Require approved client app

-Organizations can require that an access attempt to the selected cloud apps needs to be made from an approved client app. These approved client aps support [Intune app protection policies](/intune/app-protection-policy) independent of any mobile-device management (MDM) solution.
+Organizations can require that an access attempt to the selected cloud apps needs to be made from an approved client app. These approved client apps support [Intune app protection policies](/intune/app-protection-policy) independent of any mobile-device management (MDM) solution.

This setting applies to the following client apps:

articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md

Lines changed: 8 additions & 7 deletions

@@ -94,14 +94,15 @@ When *EnforceCloudPasswordPolicyForPasswordSyncedUsers* is disabled (which is th

To enable the EnforceCloudPasswordPolicyForPasswordSyncedUsers feature, run the following command using the MSOnline PowerShell module as shown below. You would have to type yes for the Enable parameter as shown below :
+
```
-`Set-MsolDirSyncFeature -Feature EnforceCloudPasswordPolicyForPasswordSyncedUsers`
-`cmdlet Set-MsolDirSyncFeature at command pipeline position 1`
-`Supply values for the following parameters:`
-`Enable: yes`
-`Confirm`
-`Continue with this operation?`
-`[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y`
+Set-MsolDirSyncFeature -Feature EnforceCloudPasswordPolicyForPasswordSyncedUsers
+cmdlet Set-MsolDirSyncFeature at command pipeline position 1
+Supply values for the following parameters:
+Enable: yes
+Confirm
+Continue with this operation?
+[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y
```

Once enabled, Azure AD does not go to each synchronized user to remove the `DisablePasswordExpiration` value from the PasswordPolicies attribute. Instead, the value is set to `None` during the next password sync for each user when they next change their password in on-premises AD.
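The enable-then-lazy-update behavior described above can be sketched as a tiny state transition. This is a hypothetical Python helper to illustrate the rule, not MSOnline or Azure AD Connect code:

```python
def password_policies_after_sync(current_value: str, changed_password: bool) -> str:
    """Sketch of the behavior described above: after the feature is enabled,
    Azure AD does not rewrite every synchronized user immediately. The
    DisablePasswordExpiration value becomes None only on the password sync
    that follows the user's next on-premises password change."""
    if changed_password and current_value == "DisablePasswordExpiration":
        return "None"
    return current_value

# Until the user changes their password, the old value stays in place.
print(password_policies_after_sync("DisablePasswordExpiration", False))  # → DisablePasswordExpiration
print(password_policies_after_sync("DisablePasswordExpiration", True))   # → None
```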

articles/aks/concepts-clusters-workloads.md

Lines changed: 2 additions & 2 deletions

@@ -102,7 +102,7 @@ To maintain node performance and functionality, resources are reserved on each n
- 6% of the next 112 GB of memory (up to 128 GB)
- 2% of any memory above 128 GB

-The above rules for memory and CPU allocation are used to keep agent nodes healthy, some hosting system pods critical to cluster health. These allocation rules also cause the node to report less allocatable memory and CPU than it would if it were not part of a Kubernetes cluster. The above resource reservations can't be changed.
+The above rules for memory and CPU allocation are used to keep agent nodes healthy, including some hosting system pods that are critical to cluster health. These allocation rules also cause the node to report less allocatable memory and CPU than it would if it were not part of a Kubernetes cluster. The above resource reservations can't be changed.

For example, if a node offers 7 GB, it will report 34% of memory not allocatable on top of the 750Mi hard eviction threshold.
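The 7 GB figure in this hunk can be reproduced from the reservation tiers. The sketch below assumes the lower tiers of the same published AKS formula (25% of the first 4 GB, 20% of the next 4 GB up to 8 GB, 10% of the next 8 GB up to 16 GB), which fall outside the lines shown here, and approximates the 750Mi threshold as 0.75 GB:

```python
# Memory-reservation tiers for AKS agent nodes. Only the last two tiers appear
# in the hunk above; the first three are assumed from the published AKS formula.
TIERS = [
    (4, 0.25),             # 25% of the first 4 GB
    (4, 0.20),             # 20% of the next 4 GB (up to 8 GB)
    (8, 0.10),             # 10% of the next 8 GB (up to 16 GB)
    (112, 0.06),           # 6% of the next 112 GB (up to 128 GB)
    (float("inf"), 0.02),  # 2% of any memory above 128 GB
]

HARD_EVICTION_GB = 0.75  # the 750Mi hard eviction threshold, approximated in GB

def reserved_memory_gb(node_gb: float) -> float:
    """Memory kept back from pods on a node of the given size."""
    reserved, remaining = 0.0, node_gb
    for size, rate in TIERS:
        step = min(remaining, size)
        reserved += step * rate
        remaining -= step
        if remaining <= 0:
            break
    return reserved

# The 7 GB node: 25% of 4 GB + 20% of the remaining 3 GB = 1.6 GB reserved;
# adding the eviction threshold gives the article's ~34% non-allocatable figure.
node = 7.0
frac = (reserved_memory_gb(node) + HARD_EVICTION_GB) / node
print(f"{round(frac * 100)}% not allocatable")  # → 34% not allocatable
```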

@@ -220,7 +220,7 @@ There are two Kubernetes resources that let you manage these types of applicatio

### StatefulSets

-Modern application development often aims for stateless applications, but *StatefulSets* can be used for stateful applications, such as applications that include database components. A StatefulSet is similar to a deployment in that one or more identical pods are created and managed. Replicas in a StatefulSet follow a graceful, sequential approach to deployment, scale, upgrades, and terminations. With a StatefulSet, the naming convention, network names, and storage persist as replicas are rescheduled.
+Modern application development often aims for stateless applications, but *StatefulSets* can be used for stateful applications, such as applications that include database components. A StatefulSet is similar to a deployment in that one or more identical pods are created and managed. Replicas in a StatefulSet follow a graceful, sequential approach to deployment, scale, upgrades, and terminations. With a StatefulSet (as replicas are rescheduled) the naming convention, network names, and storage persist.

You define the application in YAML format using `kind: StatefulSet`, and the StatefulSet Controller then handles the deployment and management of the required replicas. Data is written to persistent storage, provided by Azure Managed Disks or Azure Files. With StatefulSets, the underlying persistent storage remains even when the StatefulSet is deleted.

articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md

Lines changed: 4 additions & 2 deletions

@@ -65,7 +65,7 @@ A subnet must be delegated to Azure NetApp Files.

* The Azure NetApp Files delegated subnet must be able to reach all Active Directory Domain Services (ADDS) domain controllers in the domain, including all local and remote domain controllers. Otherwise, service interruption can occur.

-If you have domain controllers that are unreachable via the Azure NetApp Files delegated subnet, you can submit an Azure support request to alter the scope from **global** (default) to **site**. Azure NetApp Files needs to communicate only with domain controllers in the site where the Azure NetApp Files delegated subnet address space resides.
+If you have domain controllers that are unreachable via the Azure NetApp Files delegated subnet, you can specify an Active Directory site during creation of the Active Directory connection. Azure NetApp Files needs to communicate only with domain controllers in the site where the Azure NetApp Files delegated subnet address space resides.

See [Designing the site topology](https://docs.microsoft.com/windows-server/identity/ad-ds/plan/designing-the-site-topology) about AD sites and services.

@@ -83,8 +83,10 @@ See Azure NetApp Files [SMB FAQs](https://docs.microsoft.com/azure/azure-netapp-
This is the DNS that is required for the Active Directory domain join and SMB authentication operations.
* **Secondary DNS**
This is the secondary DNS server for ensuring redundant name services.
-* **Domain**
+* **AD DNS Domain Name**
This is the domain name of your Active Directory Domain Services that you want to join.
+* **AD Site Name**
+This is the site name that the Domain Controller discovery will be limited to.
* **SMB server (computer account) prefix**
This is the naming prefix for the machine account in Active Directory that Azure NetApp Files will use for creation of new accounts.

articles/governance/policy/concepts/definition-structure.md

Lines changed: 12 additions & 6 deletions

@@ -692,18 +692,24 @@ use within a policy rule, except the following functions and user-defined functi
The following functions are available to use in a policy rule, but differ from use in an Azure
Resource Manager template:

-- addDays(dateTime, numberOfDaysToAdd)
+- `addDays(dateTime, numberOfDaysToAdd)`
  - **dateTime**: [Required] string - String in the Universal ISO 8601 DateTime format 'yyyy-MM-ddTHH:mm:ss.fffffffZ'
  - **numberOfDaysToAdd**: [Required] integer - Number of days to add
-- utcNow() - Unlike a Resource Manager template, this can be used outside defaultValue.
+- `utcNow()` - Unlike a Resource Manager template, this can be used outside defaultValue.
  - Returns a string that is set to the current date and time in Universal ISO 8601 DateTime format 'yyyy-MM-ddTHH:mm:ss.fffffffZ'

-Additionally, the `field` function is available to policy rules. `field` is primarily used with **AuditIfNotExists** and **DeployIfNotExists** to reference fields on the resource that are being evaluated. An example of this use can be seen in the [DeployIfNotExists example](effects.md#deployifnotexists-example).
+The following functions are only available in policy rules:
+
+- `field(fieldName)`
+  - **fieldName**: [Required] string - Name of the [field](#fields) to retrieve
+  - Returns the value of that field from the resource that is being evaluated by the If condition
+  - `field` is primarily used with **AuditIfNotExists** and **DeployIfNotExists** to reference fields on the resource that are being evaluated. An example of this use can be seen in the [DeployIfNotExists example](effects.md#deployifnotexists-example).
+- `requestContext().apiVersion`
+  - Returns the API version of the request that triggered policy evaluation (example: `2019-09-01`). This will be the API version that was used in the PUT/PATCH request for evaluations on resource creation/update. The latest API version is always used during compliance evaluation on existing resources.
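The two date functions described above boil down to plain ISO 8601 date math. The sketch below is a hypothetical Python equivalent of what `utcNow()` and `addDays()` compute, not the policy engine itself; it pads Python's six fractional digits to the seven the 'yyyy-MM-ddTHH:mm:ss.fffffffZ' format expects:

```python
from datetime import datetime, timedelta, timezone

ISO_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"  # Python's %f yields 6 fractional digits

def utc_now() -> str:
    """Mimic utcNow(): current UTC time with 7 fractional digits and a Z suffix."""
    now = datetime.now(timezone.utc)
    return now.strftime(ISO_FORMAT) + "0Z"  # pad microseconds to 7 digits

def add_days(date_time: str, number_of_days_to_add: int) -> str:
    """Mimic addDays(dateTime, numberOfDaysToAdd) on a
    'yyyy-MM-ddTHH:mm:ss.fffffffZ' string."""
    parsed = datetime.strptime(date_time[:-2], ISO_FORMAT)  # drop 7th digit + 'Z'
    shifted = parsed + timedelta(days=number_of_days_to_add)
    return shifted.strftime(ISO_FORMAT) + date_time[-2:]    # restore them

print(add_days("2020-03-01T00:00:00.0000000Z", 30))  # → 2020-03-31T00:00:00.0000000Z
```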

#### Policy function example

articles/security-center/security-center-partner-integration.md

Lines changed: 2 additions & 2 deletions

@@ -124,13 +124,13 @@ Before you begin, [create an Event Hubs namespace](../event-hubs/event-hubs-crea

#### Stream the Azure Activity Log to Event Hubs

-See the following article [stream activity log to Event Hubs](../azure-monitor/platform/activity-logs-stream-event-hubs.md)
+See the following article [stream activity log to Event Hubs](../azure-monitor/platform/activity-logs-stream-event-hubs.md).

#### Install a partner SIEM connector

Routing your monitoring data to an Event Hub with Azure Monitor enables you to easily integrate with partner SIEM and monitoring tools.

-See the following article for the list of [supported SIEMs](../azure-monitor/platform/resource-logs-stream-event-hubs.md#what-you-can-do-with-platform-logs-sent-to-an-event-hub)
+See the following article for the list of [supported SIEMs](../azure-monitor/platform/stream-monitoring-data-event-hubs.md#partner-tools-with-azure-monitor-integration).

### Example for Querying data

articles/service-bus-messaging/service-bus-migrate-standard-premium.md

Lines changed: 1 addition & 1 deletion

@@ -28,7 +28,7 @@ Some of the points to note:
- The **premium** namespace should have **no entities** in it for the migration to succeed.
- All **entities** in the standard namespace are **copied** to the premium namespace during the migration process.
- Migration supports **1,000 entities per messaging unit** on the premium tier. To identify how many messaging units you need, start with the number of entities that you have on your current standard namespace.
-- You can't directly migrate from **basic tier** to **premier tier**, but you can do so indirectly by migrating from basic to standard first and then from the standard to premium in the next step.
+- You can't directly migrate from **basic tier** to **premium tier**, but you can do so indirectly by migrating from basic to standard first and then from the standard to premium in the next step.

## Migration steps

Some conditions are associated with the migration process. Familiarize yourself with the following steps to reduce the possibility of errors. These steps outline the migration process, and the step-by-step details are listed in the sections that follow.
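The 1,000-entities-per-messaging-unit rule in the bullets above lends itself to a quick sizing check. This sketch assumes premium messaging units are sold in counts of 1, 2, 4, and 8 (an assumption not stated in the excerpt):

```python
import math

ENTITIES_PER_MESSAGING_UNIT = 1000
# Assumed purchasable premium messaging-unit counts (hypothetical here).
VALID_MESSAGING_UNITS = (1, 2, 4, 8)

def messaging_units_for(entity_count: int) -> int:
    """Smallest valid messaging-unit count that covers the entity count under
    the migration limit of 1,000 entities per messaging unit."""
    needed = max(1, math.ceil(entity_count / ENTITIES_PER_MESSAGING_UNIT))
    for units in VALID_MESSAGING_UNITS:
        if units >= needed:
            return units
    raise ValueError("entity count exceeds what premium migration supports")

# A standard namespace with 2,500 entities needs ceil(2500/1000) = 3,
# rounded up to the next purchasable count.
print(messaging_units_for(2500))  # → 4
```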

articles/stream-analytics/stream-analytics-with-azure-functions.md

Lines changed: 3 additions & 0 deletions

@@ -188,6 +188,9 @@ Follow the [Real-time fraud detection](stream-analytics-real-time-fraud-detectio

If a failure occurs while sending events to Azure Functions, Stream Analytics retries most operations. All http exceptions are retried until success with the exception of http error 413 (entity too large). An entity too large error is treated as a data error that is subjected to the [retry or drop policy](stream-analytics-output-error-policy.md).

+> [!NOTE]
+> The timeout for HTTP requests from Stream Analytics to Azure Functions is set to 100 seconds. If your Azure Functions app takes more than 100 seconds to process a batch, Stream Analytics errors out.

## Known issues

In the Azure portal, when you try to reset the Max Batch Size/ Max Batch Count value to empty (default), the value changes back to the previously entered value upon save. Manually enter the default values for these fields in this case.
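The retry rule in the paragraph above can be sketched as a small classifier. This is a hypothetical illustration of the described policy, not Stream Analytics code:

```python
ENTITY_TOO_LARGE = 413  # HTTP 413

def classify_http_failure(status_code: int) -> str:
    """Mirror the rule above: every HTTP failure is retried until success,
    except 413 (entity too large), which is treated as a data error and
    handed to the output retry/drop policy instead."""
    if 200 <= status_code < 300:
        return "success"
    if status_code == ENTITY_TOO_LARGE:
        return "data-error"
    return "retry"

print(classify_http_failure(503))  # → retry
print(classify_http_failure(413))  # → data-error
```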

includes/virtual-machines-linux-lunzero.md

Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@ ms.topic: include
ms.date: 10/26/2018
ms.author: cynthn
---

-When adding data disks to a Linux VM, you may encounter errors if a disk does not exist at LUN 0. If you are adding a disk manually using the `azure vm disk attach-new` command and you specify a LUN (`--lun`) rather than allowing the Azure platform to determine the appropriate LUN, take care that a disk already exists / will exist at LUN 0.
+When adding data disks to a Linux VM, you may encounter errors if a disk does not exist at LUN 0. If you are adding a disk manually using the `az vm disk attach -new` command and you specify a LUN (`--lun`) rather than allowing the Azure platform to determine the appropriate LUN, take care that a disk already exists / will exist at LUN 0.

Consider the following example showing a snippet of the output from `lsscsi`: