Commit 2171e20

Merge pull request #226927 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents 36971c6 + 455145e commit 2171e20

File tree

6 files changed: +17 additions, -13 deletions

articles/azure-arc/data/release-notes.md

Lines changed: 2 additions & 2 deletions

@@ -1018,9 +1018,9 @@ This release introduces the following breaking changes:
 
 ### Additional changes
 
-* A new optional parameter was added to `azdata arc postgres server create` called `--volume-claim mounts`. The value is a comma-separated list of volume claim mounts. A volume claim mount is a pair of volume type and PVC name. The only volume type currently supported is `backup`. In PostgreSQL, when volume type is `backup`, the PVC is mounted to `/mnt/db-backups`. This enables sharing backups between PostgresSQL instances so that the backup of one PostgresSQL instance can be restored in another instance.
+* A new optional parameter was added to `azdata arc postgres server create` called `--volume-claim mounts`. The value is a comma-separated list of volume claim mounts. A volume claim mount is a pair of volume type and PVC name. The only volume type currently supported is `backup`. In PostgreSQL, when volume type is `backup`, the PVC is mounted to `/mnt/db-backups`. This enables sharing backups between PostgreSQL instances so that the backup of one PostgreSQL instance can be restored in another instance.
 
-* New short names for PostgresSQL custom resource definitions:
+* New short names for PostgreSQL custom resource definitions:
   * `pg11`
   * `pg12`
 * Telemetry upload provides user with either:
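
The parameter described in the first bullet above can be sketched as a command. This is a hypothetical invocation, not taken from the release notes: the server name `pg1` and PVC name `backups-pvc` are made up, the hyphenated flag spelling `--volume-claim-mounts` is an assumption (the note renders it with a space), and the `volume-type:pvc-name` ordering follows the note's "pair of volume type and PVC name" wording.

```azurecli
# Sketch only: create a PostgreSQL server group that mounts an existing
# backup PVC; with volume type `backup`, the PVC appears at /mnt/db-backups.
azdata arc postgres server create -n pg1 --volume-claim-mounts backup:backups-pvc
```

Because the PVC is shared, a backup taken on one server group mounted on this claim can be restored on another server group that mounts the same claim.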

articles/cosmos-db/monitor-request-unit-usage.md

Lines changed: 1 addition & 1 deletion

@@ -53,7 +53,7 @@ To get the request unit usage of each operation either by total(sum) or average,
 
 :::image type="content" source="./media/monitor-request-unit-usage/request-unit-usage-operations.png" alt-text="Azure Cosmos DB Request units for operations in Azure monitor":::
 
-If you want to see the request unit usage by collection, select **Apply splitting** and choose the collection name as a filter. You will see a chat like the following with a choice of collections within the dashboard. You can then select a specific collection name to view more details:
+If you want to see the request unit usage by collection, select **Apply splitting** and choose the collection name as a filter. You will see a chart like the following with a choice of collections within the dashboard. You can then select a specific collection name to view more details:
 
 :::image type="content" source="./media/monitor-request-unit-usage/request-unit-usage-collection.png" alt-text="Azure Cosmos DB Request units for all operations by the collection in Azure monitor" border="true":::
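
The portal's "Apply splitting" view described in the changed paragraph can also be approximated from the CLI. A sketch under assumptions: the resource ID placeholders are yours to fill in, and the metric name `TotalRequestUnits` and dimension `CollectionName` should be verified against `az monitor metrics list-definitions` for your account.

```azurecli
# Sketch: request unit usage split per collection over 5-minute intervals.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DocumentDB/databaseAccounts/<account>" \
  --metric "TotalRequestUnits" \
  --interval PT5M \
  --aggregation Total \
  --filter "CollectionName eq '*'"
```

The `--filter` with a wildcard dimension value is what requests the per-collection split, mirroring the splitting choice in the dashboard.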

articles/dns/private-resolver-hybrid-dns.md

Lines changed: 2 additions & 2 deletions

@@ -59,7 +59,7 @@ In this article, the private zone **azure.contoso.com** and the resource record
 
 ## Create an Azure DNS Private Resolver
 
-The following quickstarts are available to help you create a private resolver. These quickstarts walk you through creating a resource group, a virtual network, and Azure DNS Private Resolver. The steps to configure an inbound endpoint, outbound endpoint, and DNS forwarding ruleset are provied:
+The following quickstarts are available to help you create a private resolver. These quickstarts walk you through creating a resource group, a virtual network, and Azure DNS Private Resolver. The steps to configure an inbound endpoint, outbound endpoint, and DNS forwarding ruleset are provided:
 - [Create a private resolver - portal](dns-private-resolver-get-started-portal.md)
 - [Create a private resolver - PowerShell](dns-private-resolver-get-started-powershell.md)
 
@@ -111,4 +111,4 @@ The path for this query is: client's default DNS resolver (10.100.0.2) > on-prem
 * Learn about [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
 * Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md)
 * Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
-* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns).
+* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns).

articles/event-grid/storage-upload-process-images.md

Lines changed: 2 additions & 3 deletions

@@ -6,9 +6,8 @@ author: normesta
 ms.service: storage
 ms.subservice: blobs
 ms.topic: tutorial
-ms.date: 04/04/2022
-ms.author: normesta
-ms.reviewer: dineshm
+ms.date: 02/09/2023
+ms.author: spelluru
 ms.devlang: csharp, javascript
 ms.custom: "devx-track-js, devx-track-csharp, devx-track-azurecli"
 ---

articles/service-fabric/service-fabric-concept-resource-model.md

Lines changed: 5 additions & 0 deletions

@@ -181,6 +181,11 @@ To delete an application that was deployed by using the application resource mod
 Remove-AzResource -ResourceId <String> [-Force] [-ApiVersion <String>]
 ```
 
+## Common questions and answers
+
+Error: "Application name must be a prefix of service name"
+
+Answer: Make sure the service name is formatted as follows: ProfileVetSF~CallTicketDataWebApi.
+
 ## Next steps
 
 Get information about the application resource model:
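
The naming rule in the added Q&A can be made concrete with a quick sketch. The names `ProfileVetSF` and `CallTicketDataWebApi` come from the doc's own example; the tilde-joining below is only an illustration of the prefix rule, not an official tool.

```shell
# The application resource name must be a prefix of the service resource name,
# with the two joined by '~' (application name, then '~', then service name).
applicationName="ProfileVetSF"
serviceName="${applicationName}~CallTicketDataWebApi"
echo "$serviceName"
```

Running this prints the well-formed service resource name; if the part before the `~` does not match the application resource name, deployment fails with the quoted error.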

articles/synapse-analytics/spark/apache-spark-what-is-delta-lake.md

Lines changed: 5 additions & 5 deletions

@@ -1,5 +1,5 @@
 ---
-title: What is Delta Lake
+title: What is Delta Lake?
 description: Overview of Delta Lake and how it works as part of Azure Synapse Analytics
 services: synapse-analytics
 author: jovanpop-msft
@@ -11,7 +11,7 @@ ms.date: 12/06/2022
 ms.reviewer: euang
 ---
 
-# What is Delta Lake
+# What is Delta Lake?
 
 Delta Lake is an open-source storage layer that brings ACID (atomicity, consistency, isolation, and durability) transactions to Apache Spark and big data workloads.
 
@@ -21,16 +21,16 @@ The current version of Delta Lake included with Azure Synapse has language suppo
 
 | Feature | Description |
 | --- | --- |
-| **ACID Transactions** | Data lakes are typically populated via multiple processes and pipelines, some of which are writing data concurrently with reads. Prior to Delta Lake and the addition of transactions, data engineers had to go through a manual error prone process to ensure data integrity. Delta Lake brings familiar ACID transactions to data lakes. It provides serializability, the strongest level of isolation level. Learn more at [Diving into Delta Lake: Unpacking the Transaction Log](https://databricks.com/blog/2019/08/21/diving-into-delta-lake-unpacking-the-transaction-log.html).|
-| **Scalable Metadata Handling** | In big data, even the metadata itself can be "big data". Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle all its metadata. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files at ease. |
+| **ACID Transactions** | Data lakes are typically populated through multiple processes and pipelines, some of which are writing data concurrently with reads. Prior to Delta Lake and the addition of transactions, data engineers had to go through a manual error prone process to ensure data integrity. Delta Lake brings familiar ACID transactions to data lakes. It provides serializability, the strongest level of isolation level. Learn more at [Diving into Delta Lake: Unpacking the Transaction Log](https://databricks.com/blog/2019/08/21/diving-into-delta-lake-unpacking-the-transaction-log.html).|
+| **Scalable Metadata Handling** | In big data, even the metadata itself can be "big data." Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle all its metadata. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files at ease. |
 | **Time Travel (data versioning)** | The ability to "undo" a change or go back to a previous version is one of the key features of transactions. Delta Lake provides snapshots of data enabling you to revert to earlier versions of data for audits, rollbacks or to reproduce experiments. Learn more in [Introducing Delta Lake Time Travel for Large Scale Data Lakes](https://databricks.com/blog/2019/02/04/introducing-delta-time-travel-for-large-scale-data-lakes.html). |
 | **Open Format** | Apache Parquet is the baseline format for Delta Lake, enabling you to leverage the efficient compression and encoding schemes that are native to the format. |
 | **Unified Batch and Streaming Source and Sink** | A table in Delta Lake is both a batch table, as well as a streaming source and sink. Streaming data ingest, batch historic backfill, and interactive queries all just work out of the box. |
 | **Schema Enforcement** | Schema enforcement helps ensure that the data types are correct and required columns are present, preventing bad data from causing data inconsistency. For more information, see [Diving Into Delta Lake: Schema Enforcement & Evolution](https://databricks.com/blog/2019/09/24/diving-into-delta-lake-schema-enforcement-evolution.html) |
 | **Schema Evolution** | Delta Lake enables you to make changes to a table schema that can be applied automatically, without having to write migration DDL. For more information, see [Diving Into Delta Lake: Schema Enforcement & Evolution](https://databricks.com/blog/2019/09/24/diving-into-delta-lake-schema-enforcement-evolution.html) |
 | **Audit History** | Delta Lake transaction log records details about every change made to data providing a full audit trail of the changes. |
 | **Updates and Deletes** | Delta Lake supports Scala / Java / Python and SQL APIs for a variety of functionality. Support for merge, update, and delete operations helps you to meet compliance requirements. For more information, see [Announcing the Delta Lake 0.6.1 Release](https://github.com/delta-io/delta/releases/tag/v0.6.1), [Announcing the Delta Lake 0.7 Release](https://github.com/delta-io/delta/releases/tag/v0.7.0) and [Simple, Reliable Upserts and Deletes on Delta Lake Tables using Python APIs](https://databricks.com/blog/2019/10/03/simple-reliable-upserts-and-deletes-on-delta-lake-tables-using-python-apis.html), which includes code snippets for merge, update, and delete DML commands. |
-| **100% Compatible with Apache Spark API** | Developers can use Delta Lake with their existing data pipelines with minimal change as it is fully compatible with existing Spark implementations. |
+| **100 percent compatible with Apache Spark API** | Developers can use Delta Lake with their existing data pipelines with minimal change as it is fully compatible with existing Spark implementations. |
 
 For full documentation, see the [Delta Lake Documentation Page](https://docs.delta.io/latest/delta-intro.html)
3636
