
Commit bc88d30

Merge pull request #89696 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)
2 parents 2759a04 + 8be08ae commit bc88d30


6 files changed (+11, -10 lines)


articles/active-directory/hybrid/reference-connect-version-history.md

Lines changed: 1 addition & 1 deletion
@@ -58,7 +58,7 @@ Not all releases of Azure AD Connect will be made available for auto upgrade. Th
 - Customers should be informed that the deprecated WMI endpoints for MIIS_Service have now been removed. Any WMI operations should now be done via PS cmdlets.
 - Security improvement by resetting constrained delegation on AZUREADSSOACC object
 - When adding/editing a sync rule, if there are any attributes used in the rule that are in the connector schema but not added to the connector, the attributes are automatically added to the connector. The same is true for the object type the rule affects. If anything is added to the connector, the connector will be marked for full import on the next sync cycle.
- - Using an Enterprise or Domain admin as the connector account is no longer supported.
+ - Using an Enterprise or Domain admin as the connector account is no longer supported in new AAD Connect deployments. Current AAD Connect deployments using an Enterprise or Domain admin as the connector account will not be affected by this release.
 - In the Synchronization Manager a full sync is run on rule creation/edit/deletion. A popup will appear on any rule change notifying the user if full import or full sync is going to be run.
 - Added mitigation steps for password errors to 'connectors > properties > connectivity' page
 - Added a deprecation warning for the sync service manager on the connector properties page. This warning notifies the user that changes should be made through the AADC wizard.

articles/active-directory/hybrid/tshoot-connect-sync-errors.md

Lines changed: 4 additions & 3 deletions
@@ -236,9 +236,10 @@ Azure AD Connect is not allowed to soft match a user object from on-premises AD
 ### How to fix
 To resolve this issue do one of the following:
 
-
- - change the UserPrincipalName to a value that does not match that of an Admin user in Azure AD - which will create a new user in Azure AD with the matching UserPrincipalName
- - remove the administrative role from the Admin user in Azure AD, which will enable the soft match between the on-premises user object and the existing Azure AD user object.
+ - Remove the Azure AD account (owner) from all admin roles.
+ - **Hard Delete** the quarantined object in the cloud.
+ - The next sync cycle will take care of soft-matching the on-premises user to the cloud account (since the cloud user is no longer a Global Administrator).
+ - Restore the role memberships for the owner.
 
 >[!NOTE]
 >You can assign the administrative role to the existing user object again after the soft match between the on-premises user object and the Azure AD user object has completed.
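
For reference, the hard-delete step in the new guidance can also be scripted outside the portal. The sketch below uses Microsoft Graph over curl; the access token, object ID, and required Graph permissions are placeholders and assumptions rather than part of the article, and the role removal/restore steps still happen separately.

```
# Sketch only: TOKEN and OBJECT_ID are placeholders you must supply;
# the token needs sufficient Graph permissions (for example, User.ReadWrite.All).
TOKEN="<graph-access-token>"
OBJECT_ID="<quarantined-cloud-user-object-id>"

# Soft-delete the quarantined cloud user (moves it to deleted items).
curl -X DELETE -H "Authorization: Bearer ${TOKEN}" \
  "https://graph.microsoft.com/v1.0/users/${OBJECT_ID}"

# Hard-delete (purge) it so the next sync cycle can soft-match the on-premises user.
curl -X DELETE -H "Authorization: Bearer ${TOKEN}" \
  "https://graph.microsoft.com/v1.0/directory/deletedItems/${OBJECT_ID}"
```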

articles/azure-cache-for-redis/cache-how-to-monitor.md

Lines changed: 1 addition & 1 deletion
@@ -101,7 +101,7 @@ Each metric includes two versions. One metric measures performance for the entir
 | Cache Write |The amount of data written to the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and is not Redis specific. This value corresponds to the network bandwidth of data sent to the cache from the client. |
 | Connected Clients |The number of client connections to the cache during the specified reporting interval. This maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached subsequent connection attempts to the cache will fail. Note that even if there are no active client applications, there may still be a few instances of connected clients due to internal processes and connections. |
 | CPU |The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. |
-| Errors | Specific failures and performance issues that the cache could be experiencing during a specified reporting interval. This metric has eight dimensions representing different error types, but could have more added in the future. The error types represented now are as follows: <br/><ul><li>**Failover** – when a cache fails over (subordinate promotes to master)</li><li>**Crash** – when the cache crashes unexpectedly on either of the nodes</li><li>**Dataloss** – when there is dataloss on the cache</li><li>**UnresponsiveClients** – when the clients are not reading data from the server fast enough</li><li>**AOF** – when there is an issue related to AOF persistence</li><li>**RDB** – when there is an issue related to RDB persistence</li><li>**Import** – when there is an issue related to Import RDB</li><li>**Export** – when there is an issue related to Export RDB</li></ul> |
+| Errors | Specific failures and performance issues that the cache could be experiencing during a specified reporting interval. This metric has seven dimensions representing different error types, but could have more added in the future. The error types represented now are as follows: <br/><ul><li>**Failover** – when a cache fails over (subordinate promotes to master)</li><li>**Dataloss** – when there is dataloss on the cache</li><li>**UnresponsiveClients** – when the clients are not reading data from the server fast enough</li><li>**AOF** – when there is an issue related to AOF persistence</li><li>**RDB** – when there is an issue related to RDB persistence</li><li>**Import** – when there is an issue related to Import RDB</li><li>**Export** – when there is an issue related to Export RDB</li></ul> |
 | Evicted Keys |The number of items evicted from the cache during the specified reporting interval due to the `maxmemory` limit. This maps to `evicted_keys` from the Redis INFO command. |
 | Expired Keys |The number of items expired from the cache during the specified reporting interval. This value maps to `expired_keys` from the Redis INFO command.|
 | Gets |The number of get operations from the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_get`, `cmdstat_hget`, `cmdstat_hgetall`, `cmdstat_hmget`, `cmdstat_mget`, `cmdstat_getbit`, and `cmdstat_getrange`, and is equivalent to the sum of cache hits and misses during the reporting interval. |
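
As an aside on how the Errors metric and its dimensions can be consumed, here is a hedged Azure CLI sketch; the resource ID is a placeholder, and the metric name "errors" and dimension name "ErrorType" are assumptions that should be confirmed against the metric definitions.

```
# Placeholder resource ID for an Azure Cache for Redis instance.
CACHE_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Cache/Redis/<cache-name>"

# Discover the exact metric and dimension names exposed by the resource.
az monitor metrics list-definitions --resource "$CACHE_ID"

# Query the Errors metric split by error type over one-hour intervals
# ("errors" and "ErrorType" are assumed names; adjust to the definitions above).
az monitor metrics list \
  --resource "$CACHE_ID" \
  --metric "errors" \
  --interval PT1H \
  --aggregation Maximum \
  --filter "ErrorType eq '*'"
```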

articles/azure-monitor/platform/agent-linux-troubleshoot.md

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ If none of these steps work for you, the following support channels are also ava
 
 >[!NOTE]
 >Editing configuration files for performance counters and Syslog is overwritten if the collection is configured from the [data menu Log Analytics Advanced Settings](../../azure-monitor/platform/agent-data-sources.md#configuring-data-sources) in the Azure portal for your workspace. To disable configuration for all agents, disable collection from Log Analytics **Advanced Settings** or for a single agent run the following:
-> `sudo su omsagent -c /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable`
+> `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable'`
 
 ## Installation error codes
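
The corrected command matters because `su -c` expects a single command string: the quotes keep the `--disable` flag attached to the script, and invoking `python` explicitly avoids relying on the script's execute permission. A small sketch of disabling and later re-enabling the portal-managed configuration follows; the `--enable` flag is assumed from the related agent data sources documentation rather than from this diff.

```
# Disable portal-managed configuration for this single agent (command from the note above).
sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable'

# Re-enable it later; --enable is assumed from the agent data sources documentation.
sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --enable'
```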

articles/cognitive-services/Face/Face-API-How-to-Topics/HowtoIdentifyFacesinImage.md

Lines changed: 2 additions & 2 deletions
@@ -77,7 +77,7 @@ CreatePersonResult friend1 = await faceClient.PersonGroupPerson.CreateAsync(
 // Define Bill and Clare in the same way
 ```
 ### <a name="step2-2"></a> Step 2.2: Detect faces and register them to the correct person
-Detection is done by sending a "POST" web request to the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API with the image file in the HTTP request body. When you use the client library, face detection is done through the DetectAsync method for the FaceClient class.
+Detection is done by sending a "POST" web request to the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API with the image file in the HTTP request body. When you use the client library, face detection is done through one of the Detect..Async methods of the FaceClient class.
 
 For each face that's detected, call [PersonGroup Person – Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) to add it to the correct person.
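
Because the updated sentence still describes the REST path (a POST with the image bytes in the request body), a small curl sketch may be useful; the endpoint region, key placeholder, and image filename are assumptions, not values from the article.

```
# Placeholders: substitute your Face resource endpoint, subscription key, and image file.
FACE_ENDPOINT="https://westus.api.cognitive.microsoft.com"
FACE_KEY="<subscription-key>"

# POST the raw image bytes to Face - Detect, as described above.
curl -X POST "${FACE_ENDPOINT}/face/v1.0/detect" \
  -H "Ocp-Apim-Subscription-Key: ${FACE_KEY}" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @friend1.jpg
```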

@@ -138,7 +138,7 @@ string testImageFile = @"D:\Pictures\test_img1.jpg";
 
 using (Stream s = File.OpenRead(testImageFile))
 {
-    var faces = await faceClient.Face.DetectAsync(s);
+    var faces = await faceClient.Face.DetectWithStreamAsync(s);
     var faceIds = faces.Select(face => face.FaceId).ToArray();
 
     var results = await faceClient.Face.IdentifyAsync(faceIds, personGroupId);

articles/hpc-cache/hpc-cache-mount.md

Lines changed: 2 additions & 2 deletions
@@ -36,7 +36,7 @@ Example:
 
 ```
 root@test-client:/tmp# mkdir hpccache
-root@test-client:/tmp# sudo mount 10.0.0.28:/blob-demo-0722 ./hpccache/ -orw,tcp,mountproto=tcp,vers=3,hard,intr
+root@test-client:/tmp# sudo mount 10.0.0.28:/blob-demo-0722 ./hpccache/ -orw,tcp,mountproto=tcp,vers=3,hard
 root@test-client:/tmp#
 ```

@@ -49,7 +49,7 @@ After this command succeeds, the contents of the storage export should be visibl
 
 For a robust client mount, pass these settings and arguments in your mount command:
 
-``mount -o hard,nointr,proto=tcp,mountproto=tcp,retry=30 ${CACHE_IP_ADDRESS}:/${NAMESPACE_PATH} ${LOCAL_FILESYSTEM_MOUNT_POINT}``
+``mount -o hard,proto=tcp,mountproto=tcp,retry=30 ${CACHE_IP_ADDRESS}:/${NAMESPACE_PATH} ${LOCAL_FILESYSTEM_MOUNT_POINT}``
 
 | Recommended mount command settings | |
 --- | ---
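
A rough sketch of applying the recommended options, reusing the address and export from the mount example above; the local mount point and the fstab entry are assumptions for illustration.

```
# Create a local mount point and mount the cache with the recommended options.
sudo mkdir -p /mnt/hpccache
sudo mount -o hard,proto=tcp,mountproto=tcp,retry=30 10.0.0.28:/blob-demo-0722 /mnt/hpccache

# Equivalent /etc/fstab entry for a persistent mount (assumed layout):
# 10.0.0.28:/blob-demo-0722  /mnt/hpccache  nfs  hard,proto=tcp,mountproto=tcp,retry=30  0 0
```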
