Commit ace6961

Merge pull request #189635 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents 262ee87 + 10c01d7 commit ace6961

File tree

8 files changed: +25 −15 lines changed


articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-faqs.md

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ CloudKnox is a cloud infrastructure entitlement management (CIEM) solution that

 ## What are the prerequisites to use CloudKnox?

-CloudKnox supports data collection from AWS, GCP, and/or Microsoft Azure. For data collection and analysis, customers are required to have an Azure Active Directory (Azure AD) account to use CloudKnox, however, an Azure subscription or Azure AD P1 or P2 license aren't required to use CloudKnox for AWS or GCP.
+CloudKnox supports data collection from AWS, GCP, and/or Microsoft Azure. For data collection and analysis, customers are required to have an Azure Active Directory (Azure AD) account to use CloudKnox.

 ## Can a customer use CloudKnox if they have other identities with access to their IaaS platform that aren't yet in Azure AD (for example, if part of their business has Okta or AWS Identity & Access Management (IAM))?

articles/active-directory/external-identities/b2b-tutorial-require-mfa.md

Lines changed: 3 additions & 0 deletions
@@ -132,6 +132,9 @@ To complete the scenario in this tutorial, you need:

 ![Screenshot showing the More information required message](media/tutorial-mfa/mfa-required.png)

+> [!NOTE]
+> You also can configure [cross-tenant access settings](cross-tenant-access-overview.md) to trust the MFA from the Azure AD home tenant. This allows external Azure AD users to use the MFA registered in their own tenant rather than register in the resource tenant.
+
 1. Sign out.

 ## Clean up resources

articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md

Lines changed: 1 addition & 1 deletion
@@ -119,7 +119,7 @@ Caveat: If there are synchronized accounts that need to have non-expiring passwo

 `Set-AzureADUser -ObjectID <User Object ID> -PasswordPolicies "DisablePasswordExpiration"`

 > [!NOTE]
-> For hybrid users that have a PasswordPolicies value set to `DisablePassordExpiration`, this value switches to `None` after a password change is executed on-premises.
+> For hybrid users that have a PasswordPolicies value set to `DisablePasswordExpiration`, this value switches to `None` after a password change is executed on-premises.

 > [!NOTE]
 > The Set-MsolPasswordPolicy PowerShell command will not work on federated domains.

articles/active-directory/identity-protection/concept-workload-identity-risk.md

Lines changed: 6 additions & 0 deletions
@@ -41,6 +41,8 @@ To make use of workload identity risk, including the new **Risky workload identi

 - Security operator
 - Security reader

+Users assigned the Conditional Access administrator role can create policies that use risk as a condition.
+
 ## Workload identity risk detections

 We detect risk on workload identities across sign-in behavior and offline indicators of compromise.
@@ -74,6 +76,10 @@ You can also query risky workload identities [using the Microsoft Graph API](/gr

 Organizations can export data by configurating [diagnostic settings in Azure AD](howto-export-risk-data.md) to send risk data to a Log Analytics workspace, archive it to a storage account, stream it to an event hub, or send it to a SIEM solution.

+## Enforce access controls with risk-based Conditional Access
+
+Using [Conditional Access for workload identities](../conditional-access/workload-identity.md), you can block access for specific accounts you choose when Identity Protection marks them "at risk." Policy can be applied to single-tenant service principals that have been registered in your tenant. Third-party SaaS, multi-tenanted apps, and managed identities are out of scope.
+
 ## Investigate risky workload identities

 Identity Protection provides organizations with two reports they can use to investigate workload identity risk. These reports are the risky workload identities, and risk detections for workload identities. All reports allow for downloading of events in .CSV format for further analysis outside of the Azure portal.

articles/azure-monitor/app/opentelemetry-enable.md

Lines changed: 1 addition & 1 deletion
@@ -209,7 +209,7 @@ public class Program

 ```

 > [!NOTE]
-> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Metrics API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
+> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).

 ##### [Node.js](#tab/nodejs)

articles/cosmos-db/migrate-continuous-backup.md

Lines changed: 1 addition & 1 deletion
@@ -160,7 +160,7 @@ az group deployment create -g <ResourceGroup> --template-file <ProvisionTemplate

 When migrating from periodic mode to continuous mode, you cannot run any control plane operations that performs account level updates or deletes. For example, operations such as adding or removing regions, account failover, updating backup policy etc. can't be run while the migration is in progress. The time for migration depends on the size of data and the number of regions in your account. Restore action on the migrated accounts only succeeds from the time when migration successfully completes.

-You can restore your account after the migration completes. If the migration completes at 1:00 PM PST, you can do point in time restore starting from 1.00 PM PST.
+You can restore your account after the migration completes. If the migration completes at 1:00 PM PST, you can do point in time restore starting from 1:00 PM PST.
 ## Frequently asked questions

articles/iot-hub/iot-hub-devguide-file-upload.md

Lines changed: 3 additions & 3 deletions
@@ -99,7 +99,7 @@ The following how-to guides provide complete, step-by-step instructions to uploa

 The device calls the [Create File Upload SAS URI](/rest/api/iothub/device/create-file-upload-sas-uri) REST API or the equivalent API in one of the device SDKs to initiate a file upload.

-**Supported protocols**: AMQP, AMQP-WS, MQTT, MQTT-WS, and HTTPS <br/>
+**Supported protocols**: HTTPS <br/>
 **Endpoint**: {iot hub}.azure-devices.net/devices/{deviceId}/files <br/>
 **Method**: POST

@@ -173,7 +173,7 @@ Working with Azure storage APIs is beyond the scope of this article. In addition

 The device calls the [Update File Upload Status](/rest/api/iothub/device/update-file-upload-status) REST API or the equivalent API in one of the device SDKs when it completes the file upload. The device should update the file upload status with IoT Hub regardless of whether the upload succeeds or fails.

-**Supported protocols**: AMQP, AMQP-WS, MQTT, MQTT-WS, and HTTPS <br/>
+**Supported protocols**: HTTPS <br/>
 **Endpoint**: {iot hub}.azure-devices.net/devices/{deviceId}/files/notifications <br/>
 **Method**: POST

@@ -242,4 +242,4 @@ Services can use notifications to manage uploads. For example, they can trigger

 * [Azure Blob Storage documentation](../storage/blobs/index.yml)

-* [Azure IoT device and service SDKs](iot-hub-devguide-sdks.md) lists the various language SDKs you can use when you develop both device and service apps that interact with IoT Hub.
+* [Azure IoT device and service SDKs](iot-hub-devguide-sdks.md) lists the various language SDKs you can use when you develop both device and service apps that interact with IoT Hub.

articles/storage/files/storage-troubleshoot-linux-file-connection-problems.md

Lines changed: 9 additions & 8 deletions
@@ -106,9 +106,9 @@ To close open handles for a file share, directory or file, use the [Close-AzStor

 - [Fpart](https://github.com/martymac/fpart) - Sorts files and packs them into partitions.
 - [Fpsync](https://github.com/martymac/fpart/blob/master/tools/fpsync) - Uses Fpart and a copy tool to spawn multiple instances to migrate data from src_dir to dst_url.
 - [Multi](https://github.com/pkolano/mutil) - Multi-threaded cp and md5sum based on GNU coreutils.
-- Setting the file size in advance, instead of making every write an extending write, helps improve copy speed in scenarios where the file size is known. If extending writes need to be avoided, you can set a destination file size with `truncate - size <size><file>` command. After that, `dd if=<source> of=<target> bs=1M conv=notrunc`command will copy a source file without having to repeatedly update the size of the target file. For example, you can set the destination file size for every file you want to copy (assume a share is mounted under /mnt/share):
-- `$ for i in `` find * -type f``; do truncate --size ``stat -c%s $i`` /mnt/share/$i; done`
-- and then - copy files without extending writes in parallel: `$find * -type f | parallel -j6 dd if={} of =/mnt/share/{} bs=1M conv=notrunc`
+- Setting the file size in advance, instead of making every write an extending write, helps improve copy speed in scenarios where the file size is known. If extending writes need to be avoided, you can set a destination file size with `truncate --size <size> <file>` command. After that, `dd if=<source> of=<target> bs=1M conv=notrunc`command will copy a source file without having to repeatedly update the size of the target file. For example, you can set the destination file size for every file you want to copy (assume a share is mounted under /mnt/share):
+- `for i in `` find * -type f``; do truncate --size ``stat -c%s $i`` /mnt/share/$i; done`
+- and then copy files without extending writes in parallel: `find * -type f | parallel -j6 dd if={} of =/mnt/share/{} bs=1M conv=notrunc`

 <a id="error115"></a>
 ## "Mount error(115): Operation now in progress" when you mount Azure Files by using SMB 3.x
@@ -205,10 +205,11 @@ The force flag **f** in COPYFILE results in executing **cp -p -f** on Unix. This

 Use the storage account user for copying the files:

-- `Useadd : [storage account name]`
-- `Passwd [storage account name]`
-- `Su [storage account name]`
-- `Cp -p filename.txt /share`
+- `str_acc_name=[storage account name]`
+- `sudo useradd $str_acc_name`
+- `sudo passwd $str_acc_name`
+- `su $str_acc_name`
+- `cp -p filename.txt /share`

 ## ls: cannot access '&lt;path&gt;': Input/output error

@@ -317,4 +318,4 @@ sudo mount -t cifs $smbPath $mntPath -o vers=3.0,username=$storageAccountName,pa

 ## Need help? Contact support.

-If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly.
+If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly.
