articles/azure-app-configuration/use-key-vault-references-dotnet-core.md
2 additions & 4 deletions
@@ -21,7 +21,7 @@ Your application uses the App Configuration client provider to retrieve Key Vaul
Your application is responsible for authenticating properly to both App Configuration and Key Vault. The two services don't communicate directly.
-This tutorial shows you how to implement Key Vault references in your code. It builds on the web app introduced in the quickstarts. Before you continue, finish [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) first.
+This tutorial shows you how to implement Key Vault references in your code. It builds on the web app introduced in the ASP.NET Core quickstart listed in the prerequisites below. Before you continue, complete this [quickstart](./quickstart-aspnet-core-app.md).
You can use any code editor to do the steps in this tutorial. For example, [Visual Studio Code](https://code.visualstudio.com/) is a cross-platform code editor that's available for the Windows, macOS, and Linux operating systems.
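Because the two services don't communicate directly, the app must present credentials for both. A minimal sketch of that wiring in `Program.cs`, assuming the `Microsoft.Extensions.Configuration.AzureAppConfiguration` and `Azure.Identity` packages; the endpoint below is a hypothetical placeholder:

```csharp
using Azure.Identity;

var builder = WebApplication.CreateBuilder(args);

// Hypothetical endpoint; replace with your App Configuration store's URI.
var appConfigEndpoint = new Uri("https://contoso-appconfig.azconfig.io");

builder.Configuration.AddAzureAppConfiguration(options =>
{
    // Credential used to authenticate to App Configuration itself.
    options.Connect(appConfigEndpoint, new DefaultAzureCredential())
           // Separate credential used to resolve any Key Vault references it returns.
           .ConfigureKeyVault(kv => kv.SetCredential(new DefaultAzureCredential()));
});

var app = builder.Build();
app.Run();
```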
@@ -33,9 +33,7 @@ In this tutorial, you learn how to:
## Prerequisites
-Before you start this tutorial, install the [.NET SDK 6.0 or later](https://dotnet.microsoft.com/download).
articles/azure-arc/vmware-vsphere/enable-virtual-hardware.md
5 additions & 5 deletions
@@ -2,7 +2,7 @@
title: Enable additional capabilities on Arc-enabled Server machines by linking to vCenter
description: Enable additional capabilities on Arc-enabled Server machines by linking to vCenter.
ms.topic: how-to
-ms.date: 03/13/2024
+ms.date: 07/04/2024
ms.service: azure-arc
ms.subservice: azure-arc-vmware-vsphere
ms.custom: devx-track-azurecli
@@ -48,22 +48,22 @@ Follow these steps [here](./quick-start-connect-vcenter-to-arc-using-script.md)
Use the following az commands to link Arc-enabled Server machines to vCenter at scale.
-**Create VMware resources from the specified Arc for Server machines in the vCenter**
+**Create a VMware resource from the specified Arc for Server machine in the vCenter**
```azurecli-interactive
-az connectedvmware vm create-from-machines --resource-group contoso-rg --name contoso-vm --vcenter-id /subscriptions/fedcba98-7654-3210-0123-456789abcdef/resourceGroups/contoso-rg-2/providers/Microsoft.HybridCompute/vcenters/contoso-vcenter
+az connectedvmware vm create-from-machines --resource-group contoso-rg --name contoso-vm --vcenter-id /subscriptions/999998ee-cd13-9999-b9d4-55ca5c25496d/resourceGroups/allhands-demo/providers/microsoft.connectedvmwarevsphere/VCenters/ContosovCenter
```
**Create VMware resources from all Arc for Server machines in the specified resource group belonging to that vCenter**
```azurecli-interactive
-az connectedvmware vm create-from-machines --resource-group contoso-rg --vcenter-id /subscriptions/fedcba98-7654-3210-0123-456789abcdef/resourceGroups/contoso-rg-2/providers/Microsoft.HybridCompute/vcenters/contoso-vcenter
+az connectedvmware vm create-from-machines --resource-group contoso-rg --vcenter-id /subscriptions/999998ee-cd13-9999-b9d4-55ca5c25496d/resourceGroups/allhands-demo/providers/microsoft.connectedvmwarevsphere/VCenters/ContosovCenter
```
**Create VMware resources from all Arc for Server machines in the specified subscription belonging to that vCenter**
```azurecli-interactive
-az connectedvmware vm create-from-machines --subscription contoso-sub --vcenter-id /subscriptions/fedcba98-7654-3210-0123-456789abcdef/resourceGroups/contoso-rg-2/providers/Microsoft.HybridCompute/vcenters/contoso-vcenter
+az connectedvmware vm create-from-machines --subscription contoso-sub --vcenter-id /subscriptions/999998ee-cd13-9999-b9d4-55ca5c25496d/resourceGroups/allhands-demo/providers/microsoft.connectedvmwarevsphere/VCenters/ContosovCenter
```
+> Mirroring in Microsoft Fabric is now available in preview for the NoSQL API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, the ability to unify your data estate with Fabric OneLake, and open access to your data in OneLake in Delta Parquet format. If you're considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started with mirroring, see [Mirroring Azure Cosmos DB in Microsoft Fabric](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context).
+
+To get started with Azure Synapse Link, see [“Getting started with Azure Synapse Link”](synapse-link.md).
+
Azure Cosmos DB analytical store is a fully isolated column store for enabling large-scale analytics against operational data in your Azure Cosmos DB, without any impact on your transactional workloads.
Azure Cosmos DB transactional store is schema-agnostic, and it allows you to iterate on your transactional applications without having to deal with schema or index management. In contrast, Azure Cosmos DB analytical store is schematized to optimize for analytical query performance. This article describes analytical storage in detail.
@@ -38,13 +43,13 @@ When you enable analytical store on an Azure Cosmos DB container, a new column-s
## Column store for analytical workloads on operational data
-Analytical workloads typically involve aggregations and sequential scans of selected fields. By storing the data in a column-major order, the analytical store allows a group of values for each field to be serialized together. This format reduces the IOPS required to scan or compute statistics over specific fields. It dramatically improves the query response times for scans over large data sets.
+Analytical workloads typically involve aggregations and sequential scans of selected fields. The analytical store stores data in column-major order, allowing values of each field to be serialized together, where applicable. This format reduces the IOPS required to scan or compute statistics over specific fields. It dramatically improves the query response times for scans over large data sets.
For example, if your operational tables are in the following format:
-The row store persists the above data in a serialized format, per row, on the disk. This format allows for faster transactional reads, writes, and operational queries, such as, "Return information about Product1". However, as the dataset grows large and if you want to run complex analytical queries on the data it can be expensive. For example, if you want to get "the sales trends for a product under the category named 'Equipment' across different business units and months", you need to run a complex query. Large scans on this dataset can get expensive in terms of provisioned throughput and can also impact the performance of the transactional workloads powering your real-time applications and services.
+The row store persists the above data in a serialized format, per row, on the disk. This format allows for faster transactional reads, writes, and operational queries, such as "Return information about Product 1". However, as the dataset grows large, running complex analytical queries on the data can be expensive. For example, if you want to get "the sales trends for a product under the category named 'Equipment' across different business units and months", you need to run a complex query. Large scans on this dataset can get expensive in terms of provisioned throughput and can also impact the performance of the transactional workloads powering your real-time applications and services.
Analytical store, which is a column store, is better suited for such queries because it serializes similar fields of data together and reduces the disk IOPS.
@@ -74,7 +79,7 @@ At the end of each execution of the automatic sync process, your transactional d
## Scalability & elasticity
-By using horizontal partitioning, Azure Cosmos DB transactional store can elastically scale the storage and throughput without any downtime. Horizontal partitioning in the transactional store provides scalability & elasticity in auto-sync to ensure data is synced to the analytical store in near real time. The data sync happens regardless of the transactional traffic throughput, whether it's 1000 operations/sec or 1 million operations/sec, and it doesn't impact the provisioned throughput in the transactional store.
+Azure Cosmos DB transactional store uses horizontal partitioning to elastically scale the storage and throughput without any downtime. Horizontal partitioning in the transactional store provides scalability & elasticity in auto-sync to ensure data is synced to the analytical store in near real time. The data sync happens regardless of the transactional traffic throughput, whether it's 1000 operations/sec or 1 million operations/sec, and it doesn't impact the provisioned throughput in the transactional store.
WHERE timestamp is not null or timestamp_utc is not null
```
-Starting from the query above, customers can implement transformations using `cast`, `convert` or any other T-SQL function to manipulate your data. Customers can also hide complex datatype structures by using views.
+You can implement transformations using `cast`, `convert` or any other T-SQL function to manipulate your data. You can also hide complex datatype structures by using views.
```SQL
create view MyView as
@@ -448,11 +453,11 @@ WHERE timestamp_string is not null
```
-##### Working with the MongoDB `_id` field
+##### Working with MongoDB `_id` field
-the MongoDB `_id` field is fundamental to every collection in MongoDB and originally has a hexadecimal representation. As you can see in the table above, full fidelity schema will preserve its characteristics, creating a challenge for its visualization in Azure Synapse Analytics. For correct visualization, you must convert the `_id` datatype as below:
+The MongoDB `_id` field is fundamental to every collection in MongoDB and originally has a hexadecimal representation. As you can see in the table above, full fidelity schema will preserve its characteristics, creating a challenge for its visualization in Azure Synapse Analytics. For correct visualization, you must convert the `_id` datatype as below:
-###### Working with the MongoDB `_id` field in Spark
+###### Working with MongoDB `_id` field in Spark
The example below works on Spark 2.x and 3.x versions:
@@ -473,7 +478,7 @@ val dfConverted = df.withColumn("objectId", col("_id.objectId")).withColumn("con
display(dfConverted)
```
-###### Working with the MongoDB `_id` field in SQL
+###### Working with MongoDB `_id` field in SQL
```SQL
SELECT TOP 100 id=CAST(_id as VARBINARY(1000))
@@ -489,7 +494,7 @@ It's possible to use full fidelity Schema for API for NoSQL accounts, instead of
* Currently, if you enable Synapse Link in your NoSQL API account using the Azure portal, it will be enabled as well-defined schema.
* Currently, if you want to use full fidelity schema with NoSQL or Gremlin API accounts, you have to set it at account level in the same CLI or PowerShell command that will enable Synapse Link at account level.
* Currently, Azure Cosmos DB for MongoDB isn't compatible with changing the schema representation. All MongoDB accounts have the full fidelity schema representation type.
-* Full Fidelity schema data types map mentioned above isn't valid for NoSQL API accounts, that use JSON datatypes. As an example, `float` and `integer` values are represented as `num` in analytical store.
+* The full fidelity schema data types map mentioned above isn't valid for NoSQL API accounts that use JSON datatypes. For example, `float` and `integer` values are represented as `num` in analytical store.
* It's not possible to reset the schema representation type from well-defined to full fidelity or vice-versa.
* Currently, container schemas in analytical store are defined when the container is created, even if Synapse Link has not been enabled in the database account.
* Containers or graphs created before Synapse Link was enabled with full fidelity schema at account level will have well-defined schema.
@@ -551,7 +556,7 @@ Data tiering refers to the separation of data between storage infrastructures op
After the analytical store is enabled, based on the data retention needs of the transactional workloads, you can configure the `transactional TTL` property to have records automatically deleted from the transactional store after a certain time period. Similarly, the `analytical TTL` allows you to manage the lifecycle of data retained in the analytical store, independent from the transactional store. By enabling analytical store and configuring transactional and analytical `TTL` properties, you can seamlessly tier and define the data retention period for the two stores.
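A minimal sketch of setting both properties at container creation with the .NET SDK, assuming the `Microsoft.Azure.Cosmos` package; the endpoint, key, and names are hypothetical placeholders:

```csharp
using Microsoft.Azure.Cosmos;

// Hypothetical endpoint, key, and names; adjust for your account.
CosmosClient client = new("https://contoso.documents.azure.com:443/", "<account-key>");
Database database = client.GetDatabase("ecommerce");

ContainerProperties properties = new("orders", partitionKeyPath: "/customerId")
{
    // Transactional TTL: records expire from the transactional store after 90 days.
    DefaultTimeToLive = 90 * 24 * 60 * 60,
    // Analytical TTL: -1 retains records in the analytical store indefinitely.
    AnalyticalStoreTimeToLiveInSeconds = -1
};

Container container = await database.CreateContainerIfNotExistsAsync(properties);
```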
> [!NOTE]
-> When `analytical TTL` is bigger than `transactional TTL`, your container will have data that only exists in analytical store. This data is read only and currently we don't support document level `TTL` in analytical store. If your container data may need an update or a delete at some point in time in the future, don't use `analytical TTL` bigger than `transactional TTL`. This capability is recommended for data that won't need updates or deletes in the future.
+> When `analytical TTL` is set to a value larger than the `transactional TTL` value, your container will have data that only exists in analytical store. This data is read-only and currently we don't support document-level `TTL` in analytical store. If your container data may need an update or a delete at some point in the future, don't use an `analytical TTL` bigger than the `transactional TTL`. This capability is recommended for data that won't need updates or deletes in the future.
> [!NOTE]
> If your scenario doesn't demand physical deletes, you can adopt a logical delete/update approach. Insert into the transactional store another version of the same document that only exists in analytical store but needs a logical delete/update, perhaps with a flag indicating that it's a delete or an update of an expired document. Both versions of the same document will co-exist in analytical store, and your application should only consider the last one.
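A minimal sketch of that flag-based pattern with the .NET SDK; the `isDeleted` flag and all type and property names are illustrative assumptions, not an SDK convention:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Illustrative tombstone: same id and partition key as the original document,
// plus a flag that downstream analytical queries can filter on.
public record OrderTombstone(string id, string customerId, bool isDeleted);

public static class LogicalDelete
{
    public static async Task MarkDeletedAsync(Container container, string id, string customerId)
    {
        var tombstone = new OrderTombstone(id, customerId, isDeleted: true);
        // Upsert the new version into the transactional store. Both versions will
        // co-exist in the analytical store; readers should keep only the latest per id.
        await container.UpsertItemAsync(tombstone, new PartitionKey(customerId));
    }
}
```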
@@ -562,9 +567,9 @@ After the analytical store is enabled, based on the data retention needs of the
Analytical store relies on Azure Storage and offers the following protection against physical failure:
* By default, Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) accounts. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year.
-* If any geo-region of the database account is configured for zone-redundancy, it is allocated in Zone-redundant Storage (ZRS) accounts. Customers need to enable Availability Zones on a region of their Azure Cosmos DB database account to have analytical data of that region stored in Zone-redundant Storage. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year.
+* If any geo-region of the database account is configured for zone-redundancy, it is allocated in Zone-redundant Storage (ZRS) accounts. You need to enable Availability Zones on a region of your Azure Cosmos DB database account to have analytical data of that region stored in Zone-redundant Storage. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year.
-For more information about Azure Storage durability, click [here](/azure/storage/common/storage-redundancy).
+For more information about Azure Storage durability, see [Azure Storage redundancy](/azure/storage/common/storage-redundancy).
## Backup
@@ -577,7 +582,7 @@ Synapse Link, and analytical store by consequence, has different compatibility l
* Periodic backup mode is fully compatible with Synapse Link, and these two features can be used in the same database account.
* Synapse Link for database accounts using continuous backup mode is GA.
-* Continuous backup mode for Synapse Link enabled accounts is in public preview. Currently, customers that disabled Synapse Link from containers can't migrate to continuous backup.
+* Continuous backup mode for Synapse Link enabled accounts is in public preview. Currently, you can't migrate to continuous backup if you disabled Synapse Link on any of your collections in a Cosmos DB account.
### Backup policies
@@ -640,7 +645,7 @@ Analytical store partitioning is completely independent of partitioning in
The analytical store is optimized to provide scalability, elasticity, and performance for analytical workloads without any dependency on the compute runtimes. The storage technology is self-managed to optimize your analytics workloads without manual effort.
-By decoupling the analytical storage system from the analytical compute system, data in Azure Cosmos DB analytical store can be queried simultaneously from the different analytics runtimes supported by Azure Synapse Analytics. As of today, Azure Synapse Analytics supports Apache Spark and serverless SQL pool with Azure Cosmos DB analytical store.
+Data in Azure Cosmos DB analytical store can be queried simultaneously from the different analytics runtimes supported by Azure Synapse Analytics. Azure Synapse Analytics supports Apache Spark and serverless SQL pool with Azure Cosmos DB analytical store.
> [!NOTE]
> You can only read from analytical store using Azure Synapse Analytics runtimes. The opposite is also true: Azure Synapse Analytics runtimes can only read from analytical store. Only the auto-sync process can change data in analytical store. You can write data back to the Azure Cosmos DB transactional store from an Azure Synapse Analytics Spark pool, using the built-in Azure Cosmos DB OLTP SDK.