
Commit 9adae4f

Merge pull request #272722 from MicrosoftDocs/main
Publish to live, Friday 4 AM PST, 4/19
2 parents d4440a3 + a199ef6

File tree

55 files changed: +1288 additions, -82 deletions

articles/ai-services/speech-service/custom-neural-voice.md

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@ You can tune, adjust, and use your custom voice, similarly as you would use a pr
 > [!TIP]
 > You can also use the Speech SDK and custom voice REST API to train a custom neural voice.
 >
-> Check out the code samples in the [Speech SDK repository on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/custom-voice/README.md) to see how to use personal voice in your application.
+> Check out the code samples in the [Speech SDK repository on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/custom-voice/README.md) to see how to use custom neural voice in your application.
The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, you can make several adjustments by using [SSML (Speech Synthesis Markup Language)](./speech-synthesis-markup.md?tabs=csharp) when you make the API calls to your voice model to generate synthetic speech. SSML is the markup language used to communicate with the text to speech service to convert text into audio. The adjustments you can make include change of pitch, rate, intonation, and pronunciation correction. If the voice model is built with multiple styles, you can also use SSML to switch the styles.
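The SSML adjustments described in the paragraph above (pitch, rate, intonation) can be sketched with a minimal fragment like the following; the voice name is a placeholder for a deployed custom neural voice, and the attribute values are illustrative, not from the article:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <!-- "MyCustomNeuralVoice" is a hypothetical deployment name -->
  <voice name="MyCustomNeuralVoice">
    <!-- prosody adjusts pitch and speaking rate relative to the voice default -->
    <prosody pitch="+5%" rate="-10%">
      This sentence is synthesized slightly higher in pitch and at a slower rate.
    </prosody>
  </voice>
</speak>
```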

articles/ai-services/speech-service/personal-voice-overview.md

Lines changed: 3 additions & 6 deletions
@@ -38,18 +38,15 @@ The following table summarizes the difference between personal voice and profess

 ## Try the demo

-The demo in Speech Studio is made available to approved customers. You can apply for access [here](https://aka.ms/customneural).
+If you have an S0 resource, you can access the personal voice demo in Speech Studio. To use the personal voice API, you can apply for access [here](https://aka.ms/customneural).

 1. Go to [Speech Studio](https://aka.ms/speechstudio/)
+
 1. Select the **Personal Voice** card.

    :::image type="content" source="./media/personal-voice/personal-voice-home.png" alt-text="Screenshot of the Speech Studio home page with the personal voice card visible." lightbox="./media/personal-voice/personal-voice-home.png":::

-1. Select **Request demo access**.
-
-   :::image type="content" source="./media/personal-voice/personal-voice-request-access.png" alt-text="Screenshot of the button to request access to personal voice in Speech Studio." lightbox="./media/personal-voice/personal-voice-request-access.png":::
-
-1. After your access is approved, you can record your own voice and try the voice output samples in different languages. The demo includes a subset of the languages supported by personal voice.
+1. You can record your own voice and try the voice output samples in different languages. The demo includes a subset of the languages supported by personal voice.

    :::image type="content" source="./media/personal-voice/personal-voice-samples.png" alt-text="Screenshot of the personal voice demo experience in Speech Studio." lightbox="./media/personal-voice/personal-voice-samples.png":::

articles/backup/backup-support-matrix-iaas.md

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@
 title: Support matrix for Azure VM backups
 description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service.
 ms.topic: conceptual
-ms.date: 04/04/2024
+ms.date: 04/19/2024
 ms.custom: references_regions, linux-related-content
 ms.reviewer: sharrai
 ms.service: backup
@@ -180,7 +180,7 @@ Configure standalone Azure VMs in Windows Storage Spaces | Not supported.
 [Restore Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for the flexible orchestration model to back up and restore a single Azure VM.
 Restore with managed identities | Supported for managed Azure VMs. <br><br> Not supported for classic and unmanaged Azure VMs. <br><br> Cross-region restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
 <a name="tvm-backup">Back up trusted launch VMs</a> | Backup is supported. <br><br> Backup of trusted launch VMs is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through a [Recovery Services vault](./backup-azure-arm-vms-prepare.md), the [pane for managing a VM](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and the [pane for creating a VM](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where trusted launch VMs are available. <br><br> - Configuration of backups, alerts, and monitoring for trusted launch VMs is supported through the backup center. <br><br> - Migration of an existing [Gen2 VM](../virtual-machines/generation-2.md) (protected with Azure Backup) to a trusted launch VM is currently not supported. [Learn how to create a trusted launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm). <br><br> - Item-level restore is supported for the scenarios mentioned [here](backup-support-matrix-iaas.md#support-for-file-level-restore). <br><br> Note that if the trusted launch VM was created by converting a Standard VM, ensure that you remove all the recovery points created using Standard policy before enabling the backup operation for the VM.
-[Back up confidential VMs](../confidential-computing/confidential-vm-overview.md) | The backup support is in limited preview. <br><br> Backup is supported only for confidential VMs that have no confidential disk encryption and for confidential VMs that have confidential OS disk encryption through a platform-managed key (PMK). <br><br> Backup is currently not supported for confidential VMs that have confidential OS disk encryption through a customer-managed key (CMK). <br><br> **Feature details** <br><br> - Backup is supported in [all regions where confidential VMs are available](../confidential-computing/confidential-vm-overview.md#regions). <br><br> - Backup is supported only if you're using [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can configure backup through the [pane for creating a VM](backup-azure-arm-vms-prepare.md), the [pane for managing a VM](backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and the [Recovery Services vault](backup-azure-arm-vms-prepare.md). <br><br> - [Cross-region restore](backup-azure-arm-restore-vms.md#cross-region-restore) and file recovery (item-level restore) for confidential VMs are currently not supported.
+[Back up confidential VMs](../confidential-computing/confidential-vm-overview.md) | Unsupported. <br><br> Note that the following limited preview support scenarios are discontinued and currently not available: <br><br> - Backup of Confidential VMs with no confidential disk encryption. <br> - Backup of Confidential VMs with confidential OS disk encryption through a platform-managed key (PMK).

 ## VM storage support

articles/data-factory/connector-mariadb.md

Lines changed: 12 additions & 1 deletion
@@ -7,7 +7,7 @@ ms.service: data-factory
 ms.subservice: data-movement
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 02/07/2024
+ms.date: 04/17/2024
 ms.author: jianleishen
 ---

@@ -294,6 +294,17 @@ Here are steps that help you upgrade your MariaDB driver version:

 1. The latest driver version v2 supports more MariaDB versions. For more information, see [Supported capabilities](connector-mariadb.md#supported-capabilities).

+## Differences between MariaDB using the recommended driver version and using the legacy driver version
+
+The table below shows the data type mapping differences between MariaDB connector using the recommended driver version and using the legacy driver version.
+
+|MariaDB data type |Interim service data type (using the recommended driver version) |Interim service data type (using the legacy driver version)|
+|:---|:---|:---|
+|bit(1)| UInt64|Boolean|
+|bit(M), M>1|UInt64|Byte[]|
+|bool|Boolean|Int16|
+|JSON|String|Byte[]|

 ## Related content

For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
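Per the mapping table added in this diff, `bit(1)` values that arrived as Boolean under the legacy driver now arrive as UInt64 under the recommended driver. A minimal sketch of the kind of defensive normalization a downstream consumer might apply, assuming a Python consumer; the helper and sample values are illustrative, not part of the connector:

```python
def normalize_bit1(value):
    """Coerce a MariaDB bit(1) column value to bool.

    The recommended driver surfaces bit(1) as an unsigned integer (0 or 1),
    while the legacy driver surfaced it as a Boolean, so downstream code
    can accept both representations during a driver migration.
    """
    if isinstance(value, bool):
        return value           # legacy-driver representation, already a bool
    return bool(int(value))    # recommended-driver representation, an integer

# Mixed representations, as might appear while both drivers are in use
rows = [0, 1, True, False]
flags = [normalize_bit1(v) for v in rows]
print(flags)  # [False, True, True, False]
```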

articles/data-factory/connector-mysql.md

Lines changed: 12 additions & 1 deletion
@@ -7,7 +7,7 @@ ms.service: data-factory
 ms.subservice: data-movement
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 02/07/2024
+ms.date: 04/17/2024
 ms.author: jianleishen
 ---

@@ -323,6 +323,17 @@ Here are steps that help you upgrade your MySQL driver version:

 1. The latest driver version v2 supports more MySQL versions. For more information, see [Supported capabilities](connector-mysql.md#supported-capabilities).

+## Differences between MySQL using the recommended driver version and using the legacy driver version
+
+The table below shows the data type mapping differences between MySQL connector using the recommended driver version and using the legacy driver version.
+
+|MySQL data type |Interim service data type (using the recommended driver version) |Interim service data type (using the legacy driver version)|
+|:---|:---|:---|
+|bit(1)| UInt64|Boolean|
+|bit(M), M>1|UInt64|Byte[]|
+|bool|Boolean|Int16|
+|JSON|String|Byte[]|

 ## Related content

For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
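The JSON row in the mapping table added by this diff means the legacy driver delivered JSON columns as Byte[] while the recommended driver delivers String. A minimal sketch of a consumer that tolerates both representations, assuming a Python consumer; the helper and sample payloads are illustrative, not part of the connector:

```python
import json

def parse_json_column(value):
    """Parse a MySQL JSON column value regardless of driver version.

    Recommended driver: the value arrives as a str.
    Legacy driver: the value arrived as bytes (Byte[]), so decode it first.
    """
    if isinstance(value, (bytes, bytearray)):
        value = value.decode("utf-8")
    return json.loads(value)

print(parse_json_column('{"a": 1}'))   # recommended-driver shape -> {'a': 1}
print(parse_json_column(b'{"a": 1}'))  # legacy-driver shape -> {'a': 1}
```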

articles/data-factory/connector-postgresql.md

Lines changed: 14 additions & 2 deletions
@@ -7,7 +7,7 @@ ms.service: data-factory
 ms.subservice: data-movement
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 03/07/2024
+ms.date: 04/17/2024
 ms.author: jianleishen
 ---
 # Copy data from PostgreSQL using Azure Data Factory or Synapse Analytics
@@ -253,7 +253,7 @@ If you were using `RelationalSource` typed source, it is still supported as-is,

 When copying data from PostgreSQL, the following mappings are used from PostgreSQL data types to interim data types used by the service internally. See [Schema and data type mappings](copy-activity-schema-and-type-mapping.md) to learn about how copy activity maps the source schema and data type to the sink.

-|PostgreSql data type | Interim service data type | Interim service data type (for the legacy driver version) |
+|PostgreSql data type | Interim service data type | Interim service data type for PostgreSQL (legacy) |
 |:---|:---|:---|
 |`SmallInt`|`Int16`|`Int16`|
 |`Integer`|`Int32`|`Int32`|
@@ -317,5 +317,17 @@ Here are steps that help you upgrade your PostgreSQL linked service:

 1. The data type mapping for the latest PostgreSQL linked service is different from that for the legacy version. To learn the latest data type mapping, see [Data type mapping for PostgreSQL](#data-type-mapping-for-postgresql).

+## Differences between PostgreSQL and PostgreSQL (legacy)
+
+The table below shows the data type mapping differences between PostgreSQL and PostgreSQL (legacy).
+
+|PostgreSQL data type|Interim service data type for PostgreSQL|Interim service data type for PostgreSQL (legacy)|
+|:---|:---|:---|
+|Money|Decimal|String|
+|Timestamp with time zone |DateTime|String|
+|Time with time zone |DateTimeOffset|String|
+|Interval | TimeSpan|String|
+|BigDecimal|Not supported. As an alternative, utilize `to_char()` function to convert BigDecimal to String.|String|

 ## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
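The last row of the mapping table added by this diff suggests converting BigDecimal to String with `to_char()` on the PostgreSQL side, which means the sink receives text. A minimal sketch of recovering an exact numeric value from that string downstream, assuming a Python consumer; the query, column name, and format mask in the comment are illustrative, not from the article:

```python
from decimal import Decimal

# With the recommended driver, BigDecimal is not supported directly; the docs
# suggest converting it to text in the source query, along the lines of:
#   SELECT to_char(amount, 'FM9999999999.99999999') AS amount_text FROM ledger;
# (hypothetical query -- table, column, and format mask are illustrative)

def to_decimal(text_value):
    """Recover an exact decimal from the to_char() string output,
    avoiding float rounding error."""
    return Decimal(text_value)

print(to_decimal("12345.678900"))  # 12345.678900, precision preserved
```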

articles/hdinsight-aks/flink/flink-catalog-iceberg-hive.md

Lines changed: 18 additions & 6 deletions
@@ -1,9 +1,9 @@
 ---
 title: Table API and SQL - Use Iceberg Catalog type with Hive in Apache Flink® on HDInsight on AKS
-description: Learn how to create Iceberg Catalog in Apache Flink® on HDInsight on AKS
+description: Learn how to create Iceberg Catalog in Apache Flink® on HDInsight on AKS.
 ms.service: hdinsight-aks
 ms.topic: how-to
-ms.date: 3/28/2024
+ms.date: 04/19/2024
 ---

 # Create Iceberg Catalog in Apache Flink® on HDInsight on AKS
@@ -12,15 +12,27 @@ ms.date: 3/28/2024

 [Apache Iceberg](https://iceberg.apache.org/) is an open table format for huge analytic datasets. Iceberg adds tables to compute engines like Apache Flink, using a high-performance table format that works just like a SQL table. Apache Iceberg [supports](https://iceberg.apache.org/multi-engine-support/#apache-flink) both Apache Flink’s DataStream API and Table API.

-In this article, we learn how to use Iceberg Table managed in Hive catalog, with Apache Flink on HDInsight on AKS cluster
+In this article, we learn how to use Iceberg Table managed in Hive catalog, with Apache Flink on HDInsight on AKS cluster.

 ## Prerequisites
 - You're required to have an operational Flink cluster with secure shell, learn how to [create a cluster](../flink/flink-create-cluster-portal.md)
 - Refer this article on how to use CLI from [Secure Shell](./flink-web-ssh-on-portal-to-flink-sql.md) on Azure portal.

 ### Add dependencies

-Once you launch the Secure Shell (SSH), let us start downloading the dependencies required to the SSH node, to illustrate the Iceberg table managed in Hive catalog.
+**Script actions**
+
+1. Upload hadoop-hdfs-client and iceberg-flink connector jar into Flink cluster Job Manager and Task Manager.
+
+1. Go to Script actions on Cluster Azure portal.
+
+1. Upload [hadoop-hdfs-client_jar](https://hdiconfigactions2.blob.core.windows.net/flink-script-action/hudi-sa-test.sh)
+
+   :::image type="content" source="./media/flink-catalog-iceberg-hive/add-script-action.png" alt-text="Screenshot showing how to add script action.":::
+
+   :::image type="content" source="./media/flink-catalog-iceberg-hive/script-action-successful.png" alt-text="Screenshot showing script action added successfully.":::
+
+1. Once you launch the Secure Shell (SSH), let us start downloading the dependencies required to the SSH node, to illustrate the Iceberg table managed in Hive catalog.

 ```
 wget https://repo1.maven.org/maven2/org/apache/iceberg/iceberg-flink-runtime-1.17/1.4.0/iceberg-flink-runtime-1.17-1.4.0.jar -P $FLINK_HOME/lib
@@ -36,7 +48,7 @@ A detailed explanation is given on how to get started with Flink SQL Client usin
 ```
 ### Create Iceberg Table managed in Hive catalog

-With the following steps, we illustrate how you can create Flink-Iceberg Catalog using Hive catalog
+With the following steps, we illustrate how you can create Flink-Iceberg catalog using Hive catalog.

 ```sql
 CREATE CATALOG hive_catalog WITH (
@@ -85,7 +97,7 @@ ADD JAR '/opt/flink-webssh/lib/parquet-column-1.12.2.jar';

 #### Output of the Iceberg Table

-You can view the Iceberg Table output on the ABFS container
+You can view the Iceberg Table output on the ABFS container.

 :::image type="content" source="./media/flink-catalog-iceberg-hive/flink-catalog-iceberg-hive-output.png" alt-text="Screenshot showing output of the Iceberg table in ABFS.":::
Two binary media files changed (156 KB and 145 KB; image previews not shown).

articles/iot-edge/how-to-create-virtual-switch.md

Lines changed: 1 addition & 1 deletion
@@ -134,7 +134,7 @@ The switch is now created. Next, you'll set up the DNS.
 1. Assign the **NAT** and **gateway IP** addresses you created in the earlier section to the DHCP server, and restart the server to load the configuration. The first command should produce no output, but restarting the DHCP server should output the same warning messages that you received when you did so in the third step of this section.

    ```powershell
-   Set-DhcpServerV4OptionValue -ScopeID {natIp} -Router {gatewayIp}
+   Set-DhcpServerV4OptionValue -ScopeID {startIp} -Router {gatewayIp}
    Restart-service dhcpserver
    ```
