
Commit 49e059f

Merge pull request #296402 from MicrosoftDocs/main
3/17/2025 11:00 AM IST Publish
2 parents 925fe3b + 01ae4f5 commit 49e059f

4 files changed: +68 -39 lines changed


articles/azure-sql-edge/deploy-onnx.md

Lines changed: 3 additions & 3 deletions

@@ -2,9 +2,9 @@
 title: Deploy and make predictions with ONNX
 titleSuffix: SQL machine learning
 description: Learn how to train a model, convert it to ONNX, deploy it to Azure SQL Edge, and then run native PREDICT on data using the uploaded ONNX model.
-author: WilliamDAssafMSFT
-ms.author: wiassaf
-ms.reviewer: hudequei, randolphwest
+author: rwestMSFT
+ms.author: randolphwest
+ms.reviewer: hudequei, vanto
 ms.date: 09/21/2024
 ms.service: sql
 ms.subservice: machine-learning

articles/azure-sql-edge/onnx-overview.md

Lines changed: 3 additions & 3 deletions

@@ -1,9 +1,9 @@
 ---
 title: Machine learning and AI with ONNX in Azure SQL Edge
 description: Machine learning in Azure SQL Edge supports models in the Open Neural Network Exchange (ONNX) format. ONNX is an open format you can use to interchange models between various machine learning frameworks and tools.
-author: WilliamDAssafMSFT
-ms.author: wiassaf
-ms.reviewer: hudequei, randolphwest
+author: rwestMSFT
+ms.author: randolphwest
+ms.reviewer: hudequei, vanto, kendalv
 ms.date: 09/21/2024
 ms.service: azure-sql-edge
 ms.subservice: machine-learning

articles/data-factory/connector-cassandra.md

Lines changed: 36 additions & 12 deletions

@@ -6,7 +6,7 @@ author: jianleishen
 ms.subservice: data-movement
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 02/28/2025
+ms.date: 03/07/2025
 ms.author: jianleishen
 ---
 # Copy data from Cassandra using Azure Data Factory or Synapse Analytics
@@ -30,7 +30,7 @@ For a list of data stores that are supported as sources/sinks, see the [Supporte
 
 Specifically, this Cassandra connector supports:
 
-- Cassandra **versions 3.x.x and 4.x.x** for version 2.0.
+- Cassandra **versions 3.x.x and 4.x.x** for version 2.0 (Preview).
 - Cassandra **versions 2.x and 3.x** for version 1.0.
 - Copying data using **Basic** or **Anonymous** authentication.
 
@@ -81,7 +81,7 @@ The following properties are supported for Cassandra linked service:
 | Property | Description | Required |
 |:--- |:--- |:--- |
 | type |The type property must be set to: **Cassandra** |Yes |
-| version | The version that you specify. The value is `2.0`. | Yes for version 2.0, not supported for version 1.0. |
+| version | The version that you specify. The value is `2.0`. | Yes for version 2.0 (Preview), not supported for version 1.0. |
 | host |One or more IP addresses or host names of Cassandra servers.<br/>Specify a comma-separated list of IP addresses or host names to connect to all servers concurrently. |Yes |
 | port |The TCP port that the Cassandra server uses to listen for client connections. |No (default is 9042) |
 | authenticationType | Type of authentication used to connect to the Cassandra database.<br/>Allowed values are: **Basic**, and **Anonymous**. |Yes |
@@ -233,20 +233,23 @@ If you use version 1.0 to copy data from Cassandra, set the source type in the c
 
 When copying data from Cassandra, the following mappings are used from Cassandra data types to interim data types used internally within the service. See [Schema and data type mappings](copy-activity-schema-and-type-mapping.md) to learn about how copy activity maps the source schema and data type to the sink.
 
-| Cassandra data type | Interim service data type (for version 2.0) | Interim service data type (for version 1.0) |
+| Cassandra data type | Interim service data type (for version 2.0 (Preview)) | Interim service data type (for version 1.0) |
 |:--- |:--- |:--- |
 | ASCII |String |String |
 | BIGINT |Int64 |Int64 |
 | BLOB |Byte[] |Byte[] |
 | BOOLEAN |Boolean |Boolean |
+| DATE | DateTime | DateTime |
 | DECIMAL |Decimal |Decimal |
 | DOUBLE |Double |Double |
 | FLOAT |Single |Single |
 | INET |String |String |
 | INT |Int32 |Int32 |
+| SMALLINT | Short | Int16 |
 | TEXT |String |String |
 | TIMESTAMP |DateTime |DateTime |
 | TIMEUUID |Guid |Guid |
+| TINYINT | SByte | Int16 |
 | UUID |Guid |Guid |
 | VARCHAR |String |String |
 | VARINT |Decimal |Decimal |
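The hunk above adds `DATE`, `SMALLINT`, and `TINYINT` to the mapping table, with `SMALLINT` and `TINYINT` mapping differently per connector version. That lookup can be sketched as a small, purely illustrative helper (the names below mirror the table in the diff; nothing here is a real Data Factory API):

```python
# Illustrative sketch of the interim-type lookup from the table above.
# Only SMALLINT and TINYINT differ between connector versions 1.0 and 2.0.
VERSION_SPECIFIC = {
    "1.0": {"SMALLINT": "Int16", "TINYINT": "Int16"},
    "2.0": {"SMALLINT": "Short", "TINYINT": "SByte"},
}

# Types whose mapping is identical in both connector versions.
COMMON = {
    "ASCII": "String", "BIGINT": "Int64", "BLOB": "Byte[]",
    "BOOLEAN": "Boolean", "DATE": "DateTime", "DECIMAL": "Decimal",
    "DOUBLE": "Double", "FLOAT": "Single", "INET": "String",
    "INT": "Int32", "TEXT": "String", "TIMESTAMP": "DateTime",
    "TIMEUUID": "Guid", "UUID": "Guid", "VARCHAR": "String",
    "VARINT": "Decimal",
}

def interim_type(cassandra_type: str, version: str = "2.0") -> str:
    """Return the interim service type for a Cassandra column type."""
    t = cassandra_type.upper()
    if t in VERSION_SPECIFIC[version]:
        return VERSION_SPECIFIC[version][t]
    return COMMON[t]
```

For example, `interim_type("SMALLINT", "1.0")` gives `Int16` while `interim_type("SMALLINT", "2.0")` gives `Short`, matching the before/after rows of this hunk.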
@@ -259,6 +262,26 @@ When copying data from Cassandra, the following mappings are used from Cassandra
 > The length of Binary Column and String Column lengths cannot be greater than 4000.
 >
 
+## Work with collections when using version 2.0 (Preview)
+
+When using version 2.0 (Preview) to copy data from your Cassandra database, no virtual tables for collection types are created. You can copy a source table to the sink in its original type in JSON format.
+
+### Example
+
+For example, the following "ExampleTable" is a Cassandra database table that contains an integer primary key column named "pk_int", a text column named value, a list column, a map column, and a set column (named "StringSet").
+
+| pk_int | Value | List | Map | StringSet |
+| --- | --- | --- | --- | --- |
+| 1 |"sample value 1" |["1", "2", "3"] |{"S1": "a", "S2": "b"} |{"A", "B", "C"} |
+| 3 |"sample value 3" |["100", "101", "102", "105"] |{"S1": "t"} |{"A", "E"} |
+
+The data can be directly read from a source table, and the column values are preserved in their original types in JSON format, as illustrated in the following table:
+
+| pk_int | Value | List | Map | StringSet |
+| --- | --- | --- | --- | --- |
+| 1 |"sample value 1" |["1", "2", "3"] |{"S1": "a", "S2": "b"} |["A", "B", "C"] |
+| 3 |"sample value 3" |["100", "101", "102", "105"] |{"S1": "t"} |["A", "E"] |
+
 ## Work with collections using virtual table when using version 1.0
 
 The service uses a built-in ODBC driver to connect to and copy data from your Cassandra database. For collection types including map, set and list, the driver renormalizes the data into corresponding virtual tables. Specifically, if a table contains any collection columns, the driver generates the following virtual tables:
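The collection behavior that the added section describes (list, map, and set columns carried through as JSON rather than renormalized into virtual tables) can be sketched in a few lines. The row data mirrors "ExampleTable" from the hunk above; the helper itself is hypothetical, not connector code:

```python
import json

# Row shaped like "ExampleTable" in the added section above.
row = {
    "pk_int": 1,
    "Value": "sample value 1",
    "List": ["1", "2", "3"],
    "Map": {"S1": "a", "S2": "b"},
    "StringSet": {"A", "B", "C"},  # Cassandra set column
}

def to_json_value(value):
    """Serialize a collection column value as JSON text."""
    if isinstance(value, (set, frozenset)):
        # JSON has no set type, so a set column comes out as an array,
        # just as {"A", "B", "C"} becomes ["A", "B", "C"] in the example.
        value = sorted(value)
    return json.dumps(value)

print(to_json_value(row["List"]))       # ["1", "2", "3"]
print(to_json_value(row["StringSet"]))  # ["A", "B", "C"]
```

This matches the before/after tables in the hunk: lists and maps keep their shape, while the set column's braces become a JSON array.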
@@ -324,25 +347,26 @@ The following tables show the virtual tables that renormalize the data from the
 
 To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
 
-## Differences between Cassandra version 2.0 and version 1.0
+## Differences between Cassandra version 2.0 (Preview) and version 1.0
 
-The Cassandra connector version 2.0 offers new functionalities and is compatible with most features of version 1.0. The table below shows the feature differences between version 2.0 and version 1.0.
+The Cassandra connector version 2.0 (Preview) offers new functionalities and is compatible with most features of version 1.0. The table below shows the feature differences between version 2.0 (Preview) and version 1.0.
 
-| version 2.0 | version 1.0 |
+| Version 2.0 (Preview) | Version 1.0 |
 | --- | --- |
 | Support CQL query. | Support SQL-92 query or CQL query. |
-| Support specifying `keyspace` and `tableName` separately in Cassandra dataset. | Support editing `keyspace` when you select enter manually table name in Cassandra dataset. |
-| There is no virtual tables for collection types. | For collection types (map, set, list, etc.), refer to [Work with Cassandra collection types using virtual table when using version 1.0](#work-with-collections-using-virtual-table-when-using-version-10) section. |
+| Support specifying `keyspace` and `tableName` separately in dataset. | Support editing `keyspace` when you select enter manually table name in dataset. |
+| No virtual tables are created for collection types. For more information, see [Work with collections when using version 2.0 (Preview)](#work-with-collections-when-using-version-20-preview). | Virtual tables are created for collection types. For more information, see [Work with Cassandra collection types using virtual table when using version 1.0](#work-with-collections-using-virtual-table-when-using-version-10). |
+| The following mappings are used from Cassandra data types to interim service data type. <br><br> SMALLINT -> Short <br> TINYINT -> SByte | The following mappings are used from Cassandra data types to interim service data type. <br><br> SMALLINT -> Int16 <br> TINYINT -> Int16 |
 
 ## Upgrade the Cassandra connector
 
 Here are steps that help you upgrade the Cassandra connector:
 
-1. In **Edit linked service** page, select **2.0 (Preview)** under **Version** and configure the linked service by referring to [Linked service properties](#linked-service-properties).
+1. In **Edit linked service** page, select version 2.0 (Preview) and configure the linked service by referring to [Linked service properties](#linked-service-properties).
 
-2. If you use `query` in the copy activity source for version 2.0, see [Cassandra as source](#cassandra-as-source).
+2. In version 2.0 (Preview), the `query` in the copy activity source supports only CQL query, not SQL-92 query. For more information, see [Cassandra as source](#cassandra-as-source).
 
-3. The data type mapping for version 2.0 is different from that for version 1.0. To learn the latest data type mapping, see [Data type mapping for Cassandra](#data-type-mapping-for-cassandra).
+3. The data type mapping for version 2.0 (Preview) is different from that for version 1.0. To learn the latest data type mapping, see [Data type mapping for Cassandra](#data-type-mapping-for-cassandra).
 
 ## Related content
 For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
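The linked-service property table touched by this file's earlier hunk states a few concrete rules: `type` must be `Cassandra`, `host` is required, `authenticationType` is `Basic` or `Anonymous`, and `port` defaults to 9042 when omitted. A minimal, purely hypothetical checker for those rules (not a real Data Factory API) looks like:

```python
# Hypothetical validation sketch for the linked-service property table;
# property names come from the docs, the checker itself is illustrative.
DEFAULT_PORT = 9042  # default when "port" is omitted

def validate_linked_service(props: dict) -> dict:
    """Apply the property-table rules and fill in the port default."""
    if props.get("type") != "Cassandra":
        raise ValueError("type must be 'Cassandra'")
    if "host" not in props:
        raise ValueError("host is required")
    if props.get("authenticationType") not in ("Basic", "Anonymous"):
        raise ValueError("authenticationType must be Basic or Anonymous")
    # "version" ("2.0") is required for version 2.0 (Preview) and
    # not supported for version 1.0, so it is passed through untouched.
    validated = dict(props)
    validated.setdefault("port", DEFAULT_PORT)
    return validated

svc = validate_linked_service({
    "type": "Cassandra",
    "version": "2.0",
    "host": "host1,host2",  # comma-separated hosts are allowed
    "authenticationType": "Basic",
})
print(svc["port"])  # 9042
```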

articles/logic-apps/set-up-standard-workflows-hybrid-deployment-requirements.md

Lines changed: 26 additions & 21 deletions

@@ -6,7 +6,7 @@ ms.service: azure-logic-apps
 ms.suite: integration
 ms.reviewer: estfan, azla
 ms.topic: how-to
-ms.date: 03/10/2025
+ms.date: 03/13/2025
 # Customer intent: As a developer, I need to set up the requirements to host and run Standard logic app workflows on infrastructure that my organization owns, which can include on-premises systems, private clouds, and public clouds.
 ---
 
@@ -157,9 +157,10 @@ Your Kubernetes cluster requires inbound and outbound connectivity with the [SQL
 az login
 az account set --subscription $SUBSCRIPTION
 az provider register --namespace Microsoft.KubernetesConfiguration --wait
+az provider register --namespace Microsoft.Kubernetes --wait
 az extension add --name k8s-extension --upgrade --yes
-az group create
---name $AKS_CLUSTER_GROUP_NAME
+az group create \
+--name $AKS_CLUSTER_GROUP_NAME \
 --location $LOCATION
 az aks create \
 --resource-group $AKS_CLUSTER_GROUP_NAME \
@@ -179,7 +180,10 @@ Your Kubernetes cluster requires inbound and outbound connectivity with the [SQL
 For more information, see the following resources:
 
 - [Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure CLI](/azure/aks/learn/quick-kubernetes-deploy-cli)
+- [**az extension add**](/cli/azure/extension#az-extension-add)
+- [Register the required namespaces](/azure/container-apps/azure-arc-enable-cluster?tabs=azure-cli#setup)
 - [**az account set**](/cli/azure/account#az-account-set)
+- [**az provider register**](/cli/azure/provider#az-provider-register)
 - [**az group create**](/cli/azure/group#az-group-create)
 - [**az aks create**](/cli/azure/aks#az-aks-create)
 
@@ -219,6 +223,7 @@ To create your Azure Arc-enabled Kubernetes cluster, connect your Kubernetes clu
 
 ```azurecli
 az provider register --namespace Microsoft.ExtendedLocation --wait
+az provider register --namespace Microsoft.Kubernetes --wait
 az provider register --namespace Microsoft.KubernetesConfiguration --wait
 az provider register --namespace Microsoft.App --wait
 az provider register --namespace Microsoft.OperationalInsights --wait
@@ -243,6 +248,24 @@ To create your Azure Arc-enabled Kubernetes cluster, connect your Kubernetes clu
 - [Set-ExecutionPolicy](/powershell/module/microsoft.powershell.security/set-executionpolicy)
 - [choco install kubernetes-cli](https://docs.chocolatey.org/en-us/choco/commands/install/)
 
+1. Test your connection to your cluster by getting the [**kubeconfig** file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/):
+
+```azurecli
+az aks get-credentials \
+--resource-group $AKS_CLUSTER_GROUP_NAME \
+--name $AKS_NAME \
+--admin
+kubectl get ns
+```
+
+By default, the **kubeconfig** file is saved to the path, **~/.kube/config**. This command applies to our example Kubernetes cluster and differs for other kinds of Kubernetes clusters.
+
+For more information, see the following resources:
+
+- [Create connected cluster](../container-apps/azure-arc-enable-cluster.md?tabs=azure-cli#create-a-connected-cluster)
+- [**az aks get-credentials**](/cli/azure/aks#az-aks-get-credentials)
+- [**kubectl get**](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/)
+
 1. Install the Kubernetes package manager named **Helm**:
 
 ```azurecli
@@ -280,24 +303,6 @@ To create your Azure Arc-enabled Kubernetes cluster, connect your Kubernetes clu
 
 ## Connect your Kubernetes cluster to Azure Arc
 
-1. Test your connection to your cluster by getting the [**kubeconfig** file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/):
-
-```azurecli
-az aks get-credentials \
---resource-group $AKS_CLUSTER_GROUP_NAME \
---name $AKS_NAME \
---admin
-kubectl get ns
-```
-
-By default, the **kubeconfig** file is saved to the path, **~/.kube/config**. This command applies to our example Kubernetes cluster and differs for other kinds of Kubernetes clusters.
-
-For more information, see the following resources:
-
-- [Create connected cluster](../container-apps/azure-arc-enable-cluster.md?tabs=azure-cli#create-a-connected-cluster)
-- [**az aks get-credentials**](/cli/azure/aks#az-aks-get-credentials)
-- [**kubectl get**](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/)
-
 1. Based on your Kubernetes cluster deployment, set the following environment variable to provide a name to use for the Azure resource group that contains your Azure Arc-enabled cluster and resources:
 
 ```azurecli
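The moved step in this file's diff notes that `az aks get-credentials` writes the **kubeconfig** file to **~/.kube/config** by default. The lookup that `kubectl` performs for a single-file setup (honor the `KUBECONFIG` environment variable, otherwise fall back to that default) can be sketched as a small illustrative helper; the function itself is hypothetical, not part of any CLI:

```python
import os

def kubeconfig_path(env: dict) -> str:
    """Return the kubeconfig path used for a single-file setup:
    the KUBECONFIG environment variable if set, else ~/.kube/config."""
    return env.get("KUBECONFIG") or os.path.join(
        os.path.expanduser("~"), ".kube", "config"
    )

print(kubeconfig_path({}))                         # e.g. /home/user/.kube/config
print(kubeconfig_path({"KUBECONFIG": "/tmp/kc"}))  # /tmp/kc
```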
