Commit 2341fda

Merge branch 'main' into release-aio-ga
2 parents 8993a64 + 35fd9e2 commit 2341fda

52 files changed, +145 -136 lines

articles/app-service/configure-gateway-required-vnet-integration.md

Lines changed: 1 addition & 1 deletion

@@ -35,7 +35,7 @@ You can't use gateway-required virtual network integration:
 
 To create a gateway:
 
-1. [Create the VPN gateway and subnet](../vpn-gateway/point-to-site-certificate-gateway.md#creategw). Select a route-based VPN type.
+1. [Create the VPN gateway and subnet](../vpn-gateway/tutorial-create-gateway-portal.md). Select a route-based VPN type.
 
 1. [Set the point-to-site addresses](../vpn-gateway/point-to-site-certificate-gateway.md#addresspool). If the gateway isn't in the basic SKU, then IKEV2 must be disabled in the point-to-site configuration and SSTP must be selected. The point-to-site address space must be in the RFC 1918 address blocks 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.
 
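As a side note on the unchanged step above: the RFC 1918 constraint on the point-to-site address space is easy to check programmatically. This is an illustrative sketch (not part of the commit; the function name is made up) using Python's standard `ipaddress` module:

```python
import ipaddress

# The three RFC 1918 private blocks named in the step above
RFC1918_BLOCKS = [
    ipaddress.ip_network(b)
    for b in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
]

def is_valid_p2s_space(cidr: str) -> bool:
    """Return True if the CIDR sits entirely inside one RFC 1918 block."""
    net = ipaddress.ip_network(cidr, strict=True)
    return any(net.subnet_of(block) for block in RFC1918_BLOCKS)

print(is_valid_p2s_space("172.16.10.0/24"))  # inside 172.16.0.0/12
print(is_valid_p2s_space("100.64.0.0/10"))   # carrier-grade NAT, not RFC 1918
```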

articles/azure-vmware/configure-vsan.md

Lines changed: 2 additions & 2 deletions

@@ -88,8 +88,8 @@ Run the `Set-vSANCompressDedupe` cmdlet to set preferred space efficiency model.
 >[!NOTE]
 >Setting Compression to False and Deduplication to True sets vSAN to Dedupe and Compression.
 >Setting Compression to False and Dedupe to False, disables all space efficiency.
->Azure VMware Solution default is Dedupe and Compression
->Compression only provides slightly better performance
+>Azure VMware Solution default is Dedupe and Compression.
+>Compression only provides slightly better performance.
 >Disabling both compression and deduplication offers the greatest performance gains, however at the cost of space utilization.
 
 1. Check **Notifications** to see the progress.
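The note in the diff above encodes a small truth table for the two flags. A hedged sketch of that table (illustration only; the function and the "Compression only" label for the True/False case are inferred from the note, not stated verbatim in it):

```python
def vsan_space_efficiency(compression: bool, dedupe: bool) -> str:
    """Map the two cmdlet flags to the resulting vSAN mode, per the note:
    enabling dedupe implies compression as well."""
    if dedupe:
        return "Dedupe and Compression"  # also the Azure VMware Solution default
    if compression:
        return "Compression only"        # slightly better performance than dedupe
    return "None"                        # best performance, worst space utilization

print(vsan_space_efficiency(compression=False, dedupe=True))   # Dedupe and Compression
print(vsan_space_efficiency(compression=False, dedupe=False))  # None
```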

articles/communication-services/concepts/call-automation/call-automation.md

Lines changed: 6 additions & 0 deletions

@@ -197,6 +197,12 @@ Operation Callback URI is an optional parameter in some mid-call APIs that use e
 | `Recognize` | `RecognizeCompleted` / `RecognizeFailed` / `RecognizeCanceled` |
 | `StopContinuousDTMFRecognition` | `ContinuousDtmfRecognitionStopped` |
 | `SendDTMF` | `ContinuousDtmfRecognitionToneReceived` / `ContinuousDtmfRecognitionToneFailed` |
+| `Hold` | `HoldFailed` |
+| `StartMediaStreaming` | `MediaStreamingStarted` / `MediaStreamingFailed` |
+| `StopMediaStreaming` | `MediaStreamingStopped` / `MediaStreamingFailed` |
+| `StartTranscription` | `TranscriptionStarted` / `TranscriptionFailed` |
+| `UpdateTranscription` | `TranscriptionUpdated` / `TranscriptionFailed` |
+| `StopTranscription` | `TranscriptionStopped` / `TranscriptionFailed` |
 
 ## Next steps
 
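The rows added in this commit can be read as an operation-to-events lookup. A minimal sketch (illustrative only; the dict and helper are not part of any SDK, and the names are taken verbatim from the added table rows):

```python
# Callback events documented for the newly added mid-call operations
OPERATION_CALLBACKS = {
    "Hold": ["HoldFailed"],
    "StartMediaStreaming": ["MediaStreamingStarted", "MediaStreamingFailed"],
    "StopMediaStreaming": ["MediaStreamingStopped", "MediaStreamingFailed"],
    "StartTranscription": ["TranscriptionStarted", "TranscriptionFailed"],
    "UpdateTranscription": ["TranscriptionUpdated", "TranscriptionFailed"],
    "StopTranscription": ["TranscriptionStopped", "TranscriptionFailed"],
}

def expected_events(operation: str) -> list:
    """Return the callback event names documented for a mid-call operation."""
    return OPERATION_CALLBACKS.get(operation, [])

print(expected_events("StartTranscription"))
```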

articles/cost-management-billing/dataset-schema/cost-usage-details-focus.md

Lines changed: 4 additions & 0 deletions

@@ -20,6 +20,10 @@ To learn more about FOCUS, see [FOCUS: A new specification for cloud cost transp
 
 You can view the latest changes to the FOCUS cost and usage details file schema in the [FinOps Open Cost and Usage Specification changelog](https://github.com/FinOps-Open-Cost-and-Usage-Spec/FOCUS_Spec/blob/working_draft/CHANGELOG.md).
 
+#### Note: Version 1.0r2
+
+FOCUS 1.0r2 is a follow-up release to the FOCUS 1.0 dataset that changes how date columns are formatted, which may impact anyone who is parsing and especially modifying these values. The 1.0r2 dataset is still aligned with the FOCUS 1.0 specification. The "r2" indicates this is the second release of that 1.0 specification. The only change in this release is that all date columns now include seconds to more closely adhere to the FOCUS 1.0 specification. As an example, a 1.0 export may use "2024-01-01T00:00Z" and a 1.0r2 export would use "2024-01-01T00:00:00Z". The only difference is the extra ":00" for seconds at the end of the time segment of the ISO formatted date string.
+
 ## Version 1.0
 
 | Column | Fields | Description |
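The added 1.0r2 note means date parsers should tolerate both the seconds-less 1.0 form and the 1.0r2 form. A minimal sketch, assuming Python and a made-up helper name:

```python
from datetime import datetime

def parse_focus_timestamp(value: str) -> datetime:
    """Parse both FOCUS 1.0 ("2024-01-01T00:00Z") and 1.0r2
    ("2024-01-01T00:00:00Z") date strings. The trailing 'Z' is
    normalized so datetime.fromisoformat accepts either form."""
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

v10 = parse_focus_timestamp("2024-01-01T00:00Z")
v10r2 = parse_focus_timestamp("2024-01-01T00:00:00Z")
print(v10 == v10r2)  # True: only the formatting changed, not the instant
```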

articles/data-factory/connector-couchbase.md

Lines changed: 1 addition & 5 deletions

@@ -6,7 +6,7 @@ author: jianleishen
 ms.subservice: data-movement
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 10/12/2024
+ms.date: 11/05/2024
 ms.author: jianleishen
 ---
 # Copy data from Couchbase using Azure Data Factory (Preview)
@@ -34,10 +34,6 @@ The service provides a built-in driver to enable connectivity, therefore you don
 
 The connector supports the Couchbase version higher than 6.0.
 
-The connector now uses the following precision. The previous precision is compatible.
-- Double values use 17 significant digits (previously 15 significant digits)
-- Float values use 9 significant digits (previously 7 significant digits)
-
 ## Prerequisites
 
 [!INCLUDE [data-factory-v2-integration-runtime-requirements](includes/data-factory-v2-integration-runtime-requirements.md)]
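The precision note removed in this diff reflects a general IEEE 754 fact: 17 significant digits round-trip any double, and 9 round-trip any single-precision float. A sketch in Python for illustration (Python floats are doubles, so single precision is emulated with `struct`):

```python
import struct

# 17 significant digits are enough to round-trip any IEEE 754 double
x = 0.1
assert float(f"{x:.17g}") == x

def as_float32(v: float) -> float:
    """Round-trip a value through single precision."""
    return struct.unpack("f", struct.pack("f", v))[0]

# 9 significant digits are enough to round-trip any single-precision value
y = as_float32(1.0 / 3.0)
assert as_float32(float(f"{y:.9g}")) == y
print("round-trips verified")
```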

articles/data-factory/connector-google-bigquery-legacy.md

Lines changed: 6 additions & 1 deletion

@@ -7,7 +7,7 @@ author: jianleishen
 ms.subservice: data-movement
 ms.topic: conceptual
 ms.custom: synapse
-ms.date: 05/22/2024
+ms.date: 11/05/2024
 ---
 
 # Copy data from Google BigQuery using Azure Data Factory or Synapse Analytics (legacy)
@@ -33,6 +33,11 @@ For a list of data stores that are supported as sources or sinks by the copy act
 
 The service provides a built-in driver to enable connectivity. Therefore, you don't need to manually install a driver to use this connector.
 
+The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
+
+The connector no longer supports P12 keyfiles. If you rely on service accounts, you are recommended to use JSON keyfiles instead. The P12CustomPwd property used for supporting the P12 keyfile was also deprecated. For more information, see this [article](https://cloud.google.com/sdk/docs/release-notes).
+
+
 >[!NOTE]
 >This Google BigQuery connector is built on top of the BigQuery APIs. Be aware that BigQuery limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis, refer to [Quotas & Limits - API requests](https://cloud.google.com/bigquery/quotas#api_requests). Make sure you do not trigger too many concurrent requests to the account.
 

articles/data-factory/connector-google-bigquery.md

Lines changed: 1 addition & 5 deletions

@@ -7,7 +7,7 @@ author: jianleishen
 ms.subservice: data-movement
 ms.topic: conceptual
 ms.custom: synapse
-ms.date: 10/09/2024
+ms.date: 11/05/2024
 ---
 
 # Copy data from Google BigQuery using Azure Data Factory or Synapse Analytics
@@ -34,10 +34,6 @@ For a list of data stores that are supported as sources or sinks by the copy act
 
 The service provides a built-in driver to enable connectivity. Therefore, you don't need to manually install a driver to use this connector.
 
-The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
-
-The connector no longer supports P12 keyfiles. If you rely on service accounts, you are recommended to use JSON keyfiles instead. The P12CustomPwd property used for supporting the P12 keyfile was also deprecated. For more information, see this [article](https://cloud.google.com/sdk/docs/release-notes#bigquery_6).
-
 >[!NOTE]
 >This Google BigQuery connector is built on top of the BigQuery APIs. Be aware that BigQuery limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis, refer to [Quotas & Limits - API requests](https://cloud.google.com/bigquery/quotas#api_requests). Make sure you do not trigger too many concurrent requests to the account.
 

articles/data-factory/connector-shopify.md

Lines changed: 2 additions & 4 deletions

@@ -7,7 +7,7 @@ author: jianleishen
 ms.subservice: data-movement
 ms.topic: conceptual
 ms.custom: synapse
-ms.date: 09/12/2024
+ms.date: 11/05/2024
 ---
 
 # Copy data from Shopify using Azure Data Factory or Synapse Analytics (Preview)
@@ -35,9 +35,7 @@ The service provides a built-in driver to enable connectivity, therefore you don
 
 The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
 
-The billing_on column property was removed from the following tables. For more information, see this [article](https://shopify.dev/docs/api/admin-rest/2024-07/resources/usagecharge).
-- Recurring_Application_Charges
-- UsageCharge
+The billing_on column property was removed from the Recurring_Application_Charges and UsageCharge tables due to Shopify's official deprecation of billing_on field.
 
 ## Getting started
 

articles/data-factory/connector-xero.md

Lines changed: 4 additions & 1 deletion

@@ -6,7 +6,7 @@ author: jianleishen
 ms.subservice: data-movement
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 09/12/2024
+ms.date: 11/05/2024
 ms.author: jianleishen
 ---
 # Copy data from Xero using Azure Data Factory or Synapse Analytics
@@ -37,6 +37,9 @@ Specifically, this Xero connector supports:
 - All Xero tables (API endpoints) except "Reports".
 - Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
 
+>[!NOTE]
+>Due to the [sunset of OAuth 1.0 authentication in Xero](https://devblog.xero.com/an-update-on-why-we-are-saying-goodbye-oauth-1-0a-hello-oauth-2-0-6a839230908f), please [upgrade to OAuth 2.0 authentication type](#linked-service-properties) if you are currently using OAuth 1.0 authentication type.
+
 ## Getting started
 
 [!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]

articles/event-hubs/event-processor-balance-partition-load.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ ms.date: 07/31/2024
 
 To scale your event processing application, you can run multiple instances of the application and have the load balanced among themselves. In the older and deprecated versions, `EventProcessorHost` allowed you to balance the load between multiple instances of your program and checkpoint events when receiving the events. In the newer versions (5.0 onwards), **EventProcessorClient** (.NET and Java), or **EventHubConsumerClient** (Python and JavaScript) allows you to do the same. The development model is made simpler by using events. You can subscribe to the events that you're interested in by registering an event handler. If you're using the old version of the client library, see the following migration guides: [.NET](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md), [Java](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/servicebus/azure-messaging-servicebus/migration-guide.md), [Python](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/servicebus/azure-servicebus/migration_guide.md), and [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/servicebus/service-bus/migrationguide.md).
 
-This article describes a sample scenario for using multiple instances of client `applications to read events from an event hub. It also gives you details about features of event processor client, which allows you to receive events from multiple partitions at once and load balance with other consumers that use the same event hub and consumer group.
+This article describes a sample scenario for using multiple instances of client applications to read events from an event hub. It also gives you details about features of event processor client, which allows you to receive events from multiple partitions at once and load balance with other consumers that use the same event hub and consumer group.
 
 > [!NOTE]
 > The key to scale for Event Hubs is the idea of partitioned consumers. In contrast to the [competing consumers](/previous-versions/msp-n-p/dn568101(v=pandp.10)) pattern, the partitioned consumer pattern enables high scale by removing the contention bottleneck and facilitating end to end parallelism.
