Commit a4d995f

Merge pull request #253376 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents d5c6e66 + 4c83202 commit a4d995f

8 files changed: +51 −7 lines changed

articles/active-directory/hybrid/connect/reference-connect-sync-functions-reference.md

Lines changed: 1 addition & 1 deletion
@@ -710,7 +710,7 @@ Returns the position where the substring was found or 0 if not found.
 
 **Example:**
 `InStr("The quick brown fox","quick")`
-Evalues to 5
+Evaluates to 5
 
 `InStr("repEated","e",3,vbBinaryCompare)`
 Evaluates to 7
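
As a reading aid for the corrected line: `InStr` uses 1-based positions and returns 0 when the substring is not found. A minimal Python sketch of those semantics (the `instr` helper is illustrative, not part of the sync rules):

```python
def instr(haystack: str, needle: str, start: int = 1) -> int:
    """Mimic the sync-rules InStr: 1-based positions, 0 when not found."""
    # str.find is 0-based and case-sensitive (like vbBinaryCompare)
    pos = haystack.find(needle, start - 1)
    return pos + 1  # -1 (not found) maps to 0

assert instr("The quick brown fox", "quick") == 5
assert instr("repEated", "e", 3) == 7
```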

articles/azure-vmware/request-host-quota-azure-vmware-solution.md

Lines changed: 3 additions & 2 deletions
@@ -35,13 +35,14 @@ You'll need an Azure account in an Azure subscription that adheres to one of the
 - **Service:** All services > Azure VMware Solution
 - **Resource:** General question
 - **Summary:** Need capacity
-- **Problem type:** Capacity Management Issues
-- **Problem subtype:** Customer Request for Additional Host Quota/Capacity
+- **Problem type:** Deployment
+- **Problem subtype:** AVS Quota request
 
 1. In the **Description** of the support ticket, on the **Details** tab, provide information for:
 
 - Region Name
 - Number of hosts
+- Host SKU type
 - Any other details, including Availability Zone requirements for integrating with other Azure services (e.g. Azure NetApp Files, Azure Blob Storage)
 
 >[!NOTE]

articles/machine-learning/prompt-flow/how-to-end-to-end-llmops-with-prompt-flow.md

Lines changed: 1 addition & 1 deletion
@@ -108,7 +108,7 @@ Before you can set up a Prompt flow project with Azure Machine Learning, you need
 
 :::image type="content" source="./media/how-to-end-to-end-llmops-with-prompt-flow/github-settings.png" alt-text="Screenshot of the GitHub menu bar on a GitHub project with settings selected. " lightbox = "./media/how-to-end-to-end-llmops-with-prompt-flow/github-settings.png":::
 
-1. Then select **Secrets**, then **Actions**:
+1. Then select **Secrets and variables**, then **Actions**:
 
 :::image type="content" source="./media/how-to-end-to-end-llmops-with-prompt-flow/github-secrets.png" alt-text="Screenshot of on GitHub showing the security settings with security and actions highlighted." lightbox = "./media/how-to-end-to-end-llmops-with-prompt-flow/github-secrets.png":::
 

articles/static-web-apps/build-configuration.md

Lines changed: 39 additions & 0 deletions
@@ -222,6 +222,7 @@ inputs:
 ```
 
 ---
+
 ## Skip building the API
 
 If you want to skip building the API, you can bypass the automatic build and deploy the API built in a previous step.
@@ -304,6 +305,44 @@ inputs:
 
 ---
 
+## Run workflow without deployment secrets
+
+Sometimes you need your workflow to continue to process even when some secrets are missing. Set the `SKIP_DEPLOY_ON_MISSING_SECRETS` environment variable to `true` to configure your workflow to proceed without defined secrets.
+
+When enabled, this feature allows the workflow to continue without deploying the site's content.
+
+# [GitHub Actions](#tab/github-actions)
+
+```yaml
+...
+
+with:
+  azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
+  repo_token: ${{ secrets.GITHUB_TOKEN }}
+  action: 'upload'
+  app_location: 'src'
+  api_location: 'api'
+  output_location: 'public'
+env:
+  SKIP_DEPLOY_ON_MISSING_SECRETS: true
+```
+
+# [Azure Pipelines](#tab/azure-devops)
+
+```yaml
+...
+
+inputs:
+  app_location: 'src'
+  api_location: 'api'
+  output_location: 'public'
+  azure_static_web_apps_api_token: $(deployment_token)
+env:
+  SKIP_DEPLOY_ON_MISSING_SECRETS: true
+```
+
+---
+
 ## Environment variables
 
 You can set environment variables for your build via the `env` section of a job's configuration.

articles/storage/blobs/anonymous-read-access-overview.md

Lines changed: 2 additions & 0 deletions
@@ -29,6 +29,8 @@ To remediate anonymous access, first determine whether your storage account uses
 
 If your storage account is using the Azure Resource Manager deployment model, then you can remediate anonymous access for an account at any time by setting the account's **AllowBlobPublicAccess** property to **False**. After you set the **AllowBlobPublicAccess** property to **False**, all requests for blob data to that storage account will require authorization, regardless of the anonymous access setting for any individual container.
 
+If your storage account is using the Azure Resource Manager deployment model, then you can remediate anonymous access for an account at any time by setting the account's **AllowBlobAnonymousAccess** property to **False**. After you set the **AllowBlobAnonymousAccess** property to **False**, all requests for blob data to that storage account will require authorization, regardless of the anonymous access setting for any individual container.
+
 To learn more about how to remediate anonymous access for Azure Resource Manager accounts, see [Remediate anonymous read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md).
 
 ### Classic accounts
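
For illustration, the **AllowBlobPublicAccess** / **AllowBlobAnonymousAccess** property described in the paragraphs above can be set through the management plane. A minimal sketch using the Azure SDK for Python, which exposes the property as `allow_blob_public_access` (assumes the `azure-identity` and `azure-mgmt-storage` packages; subscription, resource group, and account names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountUpdateParameters

# Placeholder names; replace with your own subscription, group, and account.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Setting the property to False makes every blob request require authorization,
# regardless of the anonymous access setting on individual containers.
client.storage_accounts.update(
    "<resource-group>",
    "<storage-account>",
    StorageAccountUpdateParameters(allow_blob_public_access=False),
)
```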

articles/storage/blobs/immutable-storage-overview.md

Lines changed: 2 additions & 0 deletions
@@ -156,6 +156,8 @@ If you fail to pay your bill and your account has an active time-based retention
 
 ## Feature support
 
+This feature is incompatible with Point in Time Restore and Last Access Tracking.
+
 [!INCLUDE [Blob Storage feature support in Azure Storage accounts](../../../includes/azure-storage-feature-support.md)]
 
 ## Next steps

articles/storage/common/storage-ref-azcopy-configuration-settings.md

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ The following table describes each environment variable and provides links to co
 | AZCOPY_CONCURRENCY_VALUE | Specifies the number of concurrent requests that can occur. You can use this variable to increase throughput. If your computer has fewer than 5 CPUs, then the value of this variable is set to `32`. Otherwise, the default value is equal to 16 multiplied by the number of CPUs. The maximum default value of this variable is `3000`, but you can manually set this value higher or lower. See [Increase concurrency](storage-use-azcopy-optimize.md#increase-concurrency) |
 | AZCOPY_CONCURRENT_FILES | Overrides the (approximate) number of files that are in progress at any one time, by controlling how many files we concurrently initiate transfers for. |
 | AZCOPY_CONCURRENT_SCAN | Controls the (max) degree of parallelism used during scanning. Only affects parallelized enumerators, which include Azure Files/Blobs, and local file systems. |
-| AZCOPY_CONTENT_TYPE_MAP | Overrides one or more of the default MIME type mappings defined by your operating system. Set this variable to the path of a JSON file that defines any mapping. Here's the contents of an example JSON file: <br><br> {<br>&nbsp;&nbsp;"MIMETypeMapping": { <br>&nbsp;&nbsp;&nbsp;&nbsp;".323": "text/h323",<br>&nbsp;&nbsp;&nbsp;&nbsp;".aaf": "application/octet-stream",<br>&nbsp;&nbsp;&nbsp; ".aca": "application/octet-stream",<br>&nbsp;&nbsp;&nbsp;&nbsp;".accdb": "application/msaccess",<br>&nbsp;&nbsp;&nbsp;&nbsp; }<br>}
+| AZCOPY_CONTENT_TYPE_MAP | Overrides one or more of the default MIME type mappings defined by your operating system. Set this variable to the path of a JSON file that defines any mapping. Here's the contents of an example JSON file: <br><br> {<br>&nbsp;&nbsp;"MIMETypeMapping": { <br>&nbsp;&nbsp;&nbsp;&nbsp;".323": "text/h323",<br>&nbsp;&nbsp;&nbsp;&nbsp;".aaf": "application/octet-stream",<br>&nbsp;&nbsp;&nbsp; ".aca": "application/octet-stream",<br>&nbsp;&nbsp;&nbsp;&nbsp;".accdb": "application/msaccess"<br>&nbsp;&nbsp;&nbsp;&nbsp; }<br>}
 |
 | AZCOPY_DEFAULT_SERVICE_API_VERSION | Overrides the service API version so that AzCopy could accommodate custom environments such as Azure Stack. |
 | AZCOPY_DISABLE_HIERARCHICAL_SCAN | Applies only when Azure Blobs is the source. Concurrent scanning is faster but employs the hierarchical listing API, which can result in more IOs/cost. Specify 'true' to sacrifice performance but save on cost. |
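
The change to the AZCOPY_CONTENT_TYPE_MAP row removes a trailing comma that made the embedded sample invalid JSON. As a reading aid, a small Python sketch that writes the same mapping to a file and points AzCopy at it (the file name is a placeholder):

```python
import json
import os

# The example MIME-type mapping from the table row above.
mapping = {
    "MIMETypeMapping": {
        ".323": "text/h323",
        ".aaf": "application/octet-stream",
        ".aca": "application/octet-stream",
        ".accdb": "application/msaccess",
    }
}

# json.dump always emits valid JSON (no trailing commas).
with open("content_type_map.json", "w") as f:
    json.dump(mapping, f, indent=2)

# AzCopy reads the mapping from the file this variable points to.
os.environ["AZCOPY_CONTENT_TYPE_MAP"] = os.path.abspath("content_type_map.json")
```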

articles/synapse-analytics/machine-learning/tutorial-horovod-pytorch.md

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
 ---
-title: 'Tutorial: Distributed training with Horovod and Pytorch'
+title: 'Tutorial: Distributed training with Horovod and PyTorch'
 description: Tutorial on how to run distributed training with the Horovod Estimator and PyTorch
 ms.service: synapse-analytics
 ms.subservice: machine-learning
@@ -251,4 +251,4 @@ To ensure the Spark instance is shut down, end any connected sessions(notebooks)
 ## Next steps
 
 * [Check out Synapse sample notebooks](https://github.com/Azure-Samples/Synapse/tree/main/MachineLearning)
-* [Learn more about GPU-enabled Apache Spark pools](../spark/apache-spark-gpu-concept.md)
\ No newline at end of file
+* [Learn more about GPU-enabled Apache Spark pools](../spark/apache-spark-gpu-concept.md)
