Commit 5b407b8

Merge pull request #210298 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents 9b6d16c + bcfba5f commit 5b407b8

10 files changed (+23 / -21 lines)

articles/app-service/configure-connect-to-azure-storage.md

Lines changed: 3 additions & 3 deletions
@@ -106,7 +106,6 @@ The following features are supported for Linux containers:
 - Mapping `/mounts`, `mounts/foo/bar`, `/`, and `/mounts/foo.bar/` to custom-mounted storage is not supported (you can only use /mounts/pathname for mounting custom storage to your web app.)
 - Storage mounts cannot be used together with clone settings option during [deployment slot](deploy-staging-slots.md) creation.
 - Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.
-- Only Azure Files [SMB](../storage/files/files-smb-protocol.md) are supported. Azure Files [NFS](../storage/files/files-nfs-protocol.md) is not currently supported for Linux App Services.

 ::: zone-end

@@ -118,7 +117,7 @@ The following features are supported for Linux containers:
 - FTP/FTPS access to mounted storage not supported (use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)).
 - Mapping `[C-Z]:\`, `[C-Z]:\home`, `/`, and `/home` to custom-mounted storage is not supported.
 - Storage mounts cannot be used together with clone settings option during [deployment slot](deploy-staging-slots.md) creation.
-- Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.
+- Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.

 > [!NOTE]
 > Ensure ports 80 and 445 are open when using Azure Files with VNET integration.
@@ -134,6 +133,7 @@ The following features are supported for Linux containers:
 - Don't map the custom storage mount to `/tmp` or its subdirectories as this may cause timeout during app startup.
 - Storage mounts cannot be used together with clone settings option during [deployment slot](deploy-staging-slots.md) creation.
 - Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.
+- Only Azure Files [SMB](../storage/files/files-smb-protocol.md) are supported. Azure Files [NFS](../storage/files/files-nfs-protocol.md) is not currently supported for Linux App Services.

 > [!NOTE]
 > When VNET integration is used, ensure the following ports are open:
@@ -343,4 +343,4 @@ To validate that the Azure Storage is mounted successfully for the app:
 - [Configure a custom container](configure-custom-container.md?pivots=platform-linux).
 - [Video: How to mount Azure Storage as a local share](https://www.youtube.com/watch?v=OJkvpWYr57Y).

-::: zone-end
+::: zone-end

articles/event-grid/event-filtering.md

Lines changed: 1 addition & 1 deletion
@@ -353,7 +353,7 @@ FOR_EACH filter IN (a, b, c)
 See [Limitations](#limitations) section for current limitation of this operator.

 ## StringBeginsWith
-The **StringBeginsWith** operator evaluates to true if the **key** value **begins with** any of the specified **filter** values. In the following example, it checks whether the value of the `key1` attribute in the `data` section begins with `event` or `grid`. For example, `event hubs` begins with `event`.
+The **StringBeginsWith** operator evaluates to true if the **key** value **begins with** any of the specified **filter** values. In the following example, it checks whether the value of the `key1` attribute in the `data` section begins with `event` or `message`. For example, `event hubs` begins with `event`.

 ```json
 "advancedFilters": [{
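The `advancedFilters` snippet shown in the hunk is cut off by the diff window. A sketch of how such a filter could be completed for the **StringBeginsWith** example, using the `operatorType`/`key`/`values` fields of Event Grid's advanced-filter schema and the `event`/`message` values from the updated paragraph (the exact surrounding subscription payload is not shown in this diff and is assumed):

```json
"advancedFilters": [{
    "operatorType": "StringBeginsWith",
    "key": "data.key1",
    "values": ["event", "message"]
}]
```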

articles/event-grid/system-topics.md

Lines changed: 3 additions & 1 deletion
@@ -26,7 +26,9 @@ In the past, a system topic was implicit and wasn't exposed for simplicity. Syst
 - Set up alerts on publish and delivery failures

 > [!NOTE]
-> Azure Event Grid creates a system topic resource in the same Azure subscription that has the event source. For example, if you create a system topic for a storage account *ContosoStorage* in an Azure subscription *ContosoSubscription*, Event Grid creates the system topic in the *ContosoSubscription*. It's not possible to create a system topic in an Azure subscription that's different from the event source's Azure subscription.
+> - Only one Azure Event Grid system topic is allowed per source (like Subscription, Resource Group, etc.).
+> - Resource Group is required for Subscription level Event Grid system topic and cannot be changed until deleted/moved to another subscription.
+> - Azure Event Grid creates a system topic resource in the same Azure subscription that has the event source. For example, if you create a system topic for a storage account *ContosoStorage* in an Azure subscription *ContosoSubscription*, Event Grid creates the system topic in the *ContosoSubscription*. It's not possible to create a system topic in an Azure subscription that's different from the event source's Azure subscription.

 ## Lifecycle of system topics
 You can create a system topic in two ways:

articles/iot-hub/iot-hub-tls-support.md

Lines changed: 1 addition & 1 deletion
@@ -104,7 +104,7 @@ For IoT Hubs not configured for TLS 1.2 enforcement, TLS 1.2 still works with th
 * `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`
 * `TLS_RSA_WITH_AES_256_CBC_SHA`
 * `TLS_RSA_WITH_AES_128_CBC_SHA`
-* `TLS_RSA_WITH_3DES_EDE_CBC_SHA`
+* `TLS_RSA_WITH_3DES_EDE_CBC_SHA` **(This cipher will be deprecated on 10/01/2022 and will no longer be used for TLS handshakes)**

 A client can suggest a list of higher cipher suites to use during `ClientHello`. However, some of them might not be supported by IoT Hub (for example, `ECDHE-ECDSA-AES256-GCM-SHA384`). In this case, IoT Hub will try to follow the preference of the client, but eventually negotiate down the cipher suite with `ServerHello`.

articles/machine-learning/v1/how-to-cicd-data-ingestion.md

Lines changed: 3 additions & 3 deletions
@@ -106,7 +106,7 @@ steps:
   artifact: di-notebooks
 ```
-The pipeline uses [flake8](https://pypi.org/project/flake8/) to do the Python code linting. It runs the unit tests defined in the source code and publishes the linting and test results so they're available in the Azure Pipeline execution screen.
+The pipeline uses [flake8](https://pypi.org/project/flake8/) to do the Python code linting. It runs the unit tests defined in the source code and publishes the linting and test results so they're available in the Azure Pipelines execution screen.

 If the linting and unit testing is successful, the pipeline will copy the source code to the artifact repository to be used by the subsequent deployment steps.

@@ -209,7 +209,7 @@ The values in the JSON file are default values configured in the pipeline defini

 The Continuous Delivery process takes the artifacts and deploys them to the first target environment. It makes sure that the solution works by running tests. If successful, it continues to the next environment.

-The CD Azure Pipeline consists of multiple stages representing the environments. Each stage contains [deployments](/azure/devops/pipelines/process/deployment-jobs) and [jobs](/azure/devops/pipelines/process/phases?tabs=yaml) that perform the following steps:
+The CD Azure Pipelines consists of multiple stages representing the environments. Each stage contains [deployments](/azure/devops/pipelines/process/deployment-jobs) and [jobs](/azure/devops/pipelines/process/phases?tabs=yaml) that perform the following steps:

 * Deploy a Python Notebook to Azure Databricks workspace
 * Deploy an Azure Data Factory pipeline
@@ -479,4 +479,4 @@ stages:

 * [Source Control in Azure Data Factory](/azure/data-factory/source-control)
 * [Continuous integration and delivery in Azure Data Factory](/azure/data-factory/continuous-integration-delivery)
-* [DevOps for Azure Databricks](https://marketplace.visualstudio.com/items?itemName=riserrad.azdo-databricks)
+* [DevOps for Azure Databricks](https://marketplace.visualstudio.com/items?itemName=riserrad.azdo-databricks)
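The lint-and-publish step described in this file's first hunk can be sketched as an Azure Pipelines step. This is a minimal sketch, not the article's actual pipeline definition: the file names and `displayName` are assumptions, while `script` steps and the `PublishTestResults@2` task are standard Azure Pipelines YAML:

```yaml
steps:
  - script: |
      pip install flake8 pytest
      flake8 . --output-file=flake8-report.txt
      pytest --junitxml=test-results.xml
    displayName: 'Lint with flake8 and run unit tests'
  # Publish results so they show up in the Azure Pipelines execution screen,
  # even when linting or tests fail.
  - task: PublishTestResults@2
    condition: succeededOrFailed()
    inputs:
      testResultsFiles: 'test-results.xml'
```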

articles/storage/common/storage-use-azurite.md

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ ms.custom: devx-track-csharp

 # Use the Azurite emulator for local Azure Storage development

-The Azurite open-source emulator provides a free local environment for testing your Azure blob, queue storage, and table storage applications. When you're satisfied with how your application is working locally, switch to using an Azure Storage account in the cloud. The emulator provides cross-platform support on Windows, Linux, and macOS.
+The Azurite open-source emulator provides a free local environment for testing your Azure Blob, Queue Storage, and Table Storage applications. When you're satisfied with how your application is working locally, switch to using an Azure Storage account in the cloud. The emulator provides cross-platform support on Windows, Linux, and macOS.

 Azurite is the future storage emulator platform. Azurite supersedes the [Azure Storage Emulator](storage-use-emulator.md). Azurite will continue to be updated to support the latest versions of Azure Storage APIs.

articles/stream-analytics/stream-analytics-stream-analytics-query-patterns.md

Lines changed: 1 addition & 1 deletion
@@ -109,7 +109,7 @@ For more information, see [**WITH** clause](/stream-analytics-query/with-azure-s

 ## Simple pass-through query

-A simple pass-through query can be used to copy the input stream data into the output. For example, if a stream of data containing real-time vehicle information needs to be saved in a SQL database for letter analysis, a simple pass-through query will do the job.
+A simple pass-through query can be used to copy the input stream data into the output. For example, if a stream of data containing real-time vehicle information needs to be saved in a SQL database for later analysis, a simple pass-through query will do the job.

 **Input**:
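The pass-through pattern described in this hunk amounts to selecting every column from the input into the output. A minimal sketch in Stream Analytics query language, where the `vehicle-input` and `sql-output` alias names are assumptions standing in for the job's configured input and output:

```sql
SELECT
    *
INTO
    [sql-output]      -- alias of the SQL database output
FROM
    [vehicle-input]   -- alias of the input stream
```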

articles/synapse-analytics/spark/apache-spark-what-is-delta-lake.md

Lines changed: 2 additions & 2 deletions
@@ -13,9 +13,9 @@ ms.reviewer: euang

 # What is Delta Lake

-Azure Synapse Analytics is compatible with Linux Foundation Delta Lake. Delta Lake is an open-source storage layer that brings ACID (atomicity, consistency, isolation, and durability) transactions to Apache Spark and big data workloads.
+Delta Lake is an open-source storage layer that brings ACID (atomicity, consistency, isolation, and durability) transactions to Apache Spark and big data workloads.

-The current version of Delta Lake included with Azure Synapse has language support for Scala, PySpark, and .NET. There are links at the bottom of the page to more detailed examples and documentation.
+The current version of Delta Lake included with Azure Synapse has language support for Scala, PySpark, and .NET and is compatible with Linux Foundation Delta Lake. There are links at the bottom of the page to more detailed examples and documentation.

 ## Key features

articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-source-control-integration.md

Lines changed: 6 additions & 6 deletions
@@ -22,7 +22,7 @@ This tutorial outlines how to integrate your SQL Server Data Tools (SSDT) databa

 ## Set up and connect to Azure DevOps

-1. In your Azure DevOps Organization, create a project that will host your SSDT database project via an Azure Repo repository.
+1. In your Azure DevOps Organization, create a project that will host your SSDT database project via an Azure Repos repository.

 ![Create Project](./media/sql-data-warehouse-source-control-integration/1-create-project-azure-devops.png "Create Project")

@@ -60,28 +60,28 @@ For more information about connecting projects using Visual Studio, see the [Con

 ![Commit](./media/sql-data-warehouse-source-control-integration/6.5-commit-push-changes.png "Commit")

-4. Now that you have the changes committed locally in the cloned repository, sync and push your changes to your Azure Repo repository in your Azure DevOps project.
+4. Now that you have the changes committed locally in the cloned repository, sync and push your changes to your Azure Repos repository in your Azure DevOps project.

 ![Sync and Push - staging](./media/sql-data-warehouse-source-control-integration/7-commit-push-changes.png "Sync and push - staging")

 ![Sync and Push](./media/sql-data-warehouse-source-control-integration/7.5-commit-push-changes.png "Sync and push")

 ## Validation

-1. Verify changes have been pushed to your Azure Repo by updating a table column in your database project from Visual Studio SQL Server Data Tools (SSDT).
+1. Verify changes have been pushed to your Azure Repos by updating a table column in your database project from Visual Studio SQL Server Data Tools (SSDT).

 ![Validate update column](./media/sql-data-warehouse-source-control-integration/8-validation-update-column.png "Validate update column")

 2. Commit and push the change from your local repository to your Azure Repo.

 ![Push changes](./media/sql-data-warehouse-source-control-integration/9-push-column-change.png "Push changes")

-3. Verify the change has been pushed in your Azure Repo repository.
+3. Verify the change has been pushed in your Azure Repos repository.

 ![Verify](./media/sql-data-warehouse-source-control-integration/10-verify-column-change-pushed.png "Verify changes")

-4. (**Optional**) Use Schema Compare and update the changes to your target dedicated SQL pool using SSDT to ensure the object definitions in your Azure Repo repository and local repository reflect your dedicated SQL pool.
+4. (**Optional**) Use Schema Compare and update the changes to your target dedicated SQL pool using SSDT to ensure the object definitions in your Azure Repos repository and local repository reflect your dedicated SQL pool.

 ## Next steps

-- [Developing for dedicated SQL pool](sql-data-warehouse-overview-develop.md)
+- [Developing for dedicated SQL pool](sql-data-warehouse-overview-develop.md)

articles/synapse-analytics/sql/query-delta-lake-format.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 ---
 title: Query Delta Lake format using serverless SQL pool
-description: In this article, you'll learn how to query files stored in Apache Delta Lake format using serverless SQL pool.
+description: In this article, you'll learn how to query files stored in Delta Lake format using serverless SQL pool.
 services: synapse analytics
 ms.service: synapse-analytics
 ms.topic: how-to
@@ -14,7 +14,7 @@ ms.custom: ignite-fall-2021

 # Query Delta Lake files using serverless SQL pool in Azure Synapse Analytics

-In this article, you'll learn how to write a query using serverless Synapse SQL pool to read Apache Delta Lake files.
+In this article, you'll learn how to write a query using serverless Synapse SQL pool to read Delta Lake files.
 Delta Lake is an open-source storage layer that brings ACID (atomicity, consistency, isolation, and durability) transactions to Apache Spark and big data workloads.

 The serverless SQL pool in Synapse workspace enables you to read the data stored in Delta Lake format, and serve it to reporting tools.
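Reading a Delta Lake folder from serverless SQL pool uses `OPENROWSET` with `FORMAT = 'DELTA'`. A minimal sketch, where the storage account and container names are hypothetical placeholders:

```sql
-- Query a Delta Lake folder directly from serverless SQL pool
SELECT TOP 10 *
FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/delta-folder/',
        FORMAT = 'DELTA'
    ) AS [result];
```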
