Commit 3005d9c

Merge pull request #92412 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)
2 parents b339f69 + 06fd227 commit 3005d9c

7 files changed: +24 −18 lines changed


articles/azure-cache-for-redis/cache-how-to-premium-clustering.md

Lines changed: 6 additions & 3 deletions
Original file line number | Diff line number | Diff line change
@@ -120,9 +120,9 @@ For sample code on working with clustering and locating keys in the same shard w
120120
The largest premium cache size is 120 GB. You can create up to 10 shards, giving you a maximum size of 1.2 TB. If you need a larger size, you can [request more](mailto:[email protected]?subject=Redis%20Cache%20quota%20increase). For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/).
121121

122122
### Do all Redis clients support clustering?
123-
At the present time not all clients support Redis clustering. StackExchange.Redis is one that does support for it. For more information on other clients, see the [Playing with the cluster](https://redis.io/topics/cluster-tutorial#playing-with-the-cluster) section of the [Redis cluster tutorial](https://redis.io/topics/cluster-tutorial).
123+
Not all clients support Redis clustering. Check the documentation for the library you're using to verify that the library and version you use support clustering. StackExchange.Redis is one library that does support clustering, in its newer versions. For more information on other clients, see the [Playing with the cluster](https://redis.io/topics/cluster-tutorial#playing-with-the-cluster) section of the [Redis cluster tutorial](https://redis.io/topics/cluster-tutorial).
124124

125-
The Redis clustering protocol requires each client to connect to each shard directly in clustering mode. Attempting to use a client that doesn't support clustering will likely result in a lot of [MOVED redirection exceptions](https://redis.io/topics/cluster-spec#moved-redirection).
125+
The Redis clustering protocol requires each client to connect to each shard directly in clustering mode, and it also defines new error responses such as `MOVED` and `CROSSSLOT`. Attempting to use a client that doesn't support clustering with a cluster-mode cache can result in many [MOVED redirection exceptions](https://redis.io/topics/cluster-spec#moved-redirection), or can break your application outright if you make cross-slot multi-key requests.
126126

127127
> [!NOTE]
128128
> If you are using StackExchange.Redis as your client, ensure you are using the latest version of [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/) 1.0.481 or later for clustering to work correctly. If you have any issues with move exceptions, see [move exceptions](#move-exceptions) for more information.
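When a command goes to a shard that doesn't own the key's hash slot, the server replies with a `MOVED` redirection naming the slot and the owning node. A cluster-aware client parses that reply and reconnects to the right node; a minimal shell sketch of parsing such a reply (the reply text and node address below are illustrative, not from a real cache):

```shell
# An example MOVED reply as a cluster node would return it (illustrative values).
reply="MOVED 1337 10.0.0.5:13001"

# Field 2 is the hash slot, field 3 is the node that owns it.
slot=$(echo "$reply" | cut -d' ' -f2)
target=$(echo "$reply" | cut -d' ' -f3)

echo "key's hash slot $slot lives on node $target"
```

Cluster-aware clients such as newer StackExchange.Redis versions do this transparently; clients that don't will surface the reply to your application as an error.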
@@ -146,7 +146,10 @@ For non-ssl, use the following commands.
146146
For ssl, replace `1300N` with `1500N`.
147147

148148
### Can I configure clustering for a previously created cache?
149-
Currently you can only enable clustering when you create a cache. You can change the cluster size after the cache is created, but you can't add clustering to a premium cache or remove clustering from a premium cache after the cache is created. A premium cache with clustering enabled and only one shard is different than a premium cache of the same size with no clustering.
149+
Yes. First, ensure that your cache is premium by scaling it up if it is not. Next, you should see the cluster configuration options, including an option to enable clustering. You can change the cluster size after the cache is created, or after you have enabled clustering for the first time.
150+
151+
>[!IMPORTANT]
152+
>You can't undo enabling clustering. Also, a cache with clustering enabled and only one shard behaves *differently* than a cache of the same size with *no* clustering.
150153
151154
### Can I configure clustering for a basic or standard cache?
152155
Clustering is only available for premium caches.
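The shard count can also be set when a premium cache is created. A hedged Azure CLI sketch (the resource names, location, and size below are assumptions for illustration; check `az redis create --help` for the current parameters):

```azurecli-interactive
# Create a Premium-tier cache with clustering enabled (2 shards).
# All names below are hypothetical placeholders.
az redis create \
    --name myPremiumCache \
    --resource-group myResourceGroup \
    --location westus \
    --sku Premium \
    --vm-size P1 \
    --shard-count 2
```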

articles/azure-databricks/databricks-connect-to-data-sources.md

Lines changed: 2 additions & 2 deletions
@@ -26,9 +26,9 @@ The following list provides the data sources in Azure that you can use with Azur
2626
- [Azure SQL database](https://docs.azuredatabricks.net/spark/latest/data-sources/sql-databases.html)
2727

2828
This link provides the DataFrame API for connecting to SQL databases using JDBC, and shows how to control the parallelism of reads through the JDBC interface. The topic provides detailed examples using the Scala API, with abbreviated Python and Spark SQL examples at the end.
29-
- [Azure Data Lake Store](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-datalake-gen2.html)
29+
- [Azure Data Lake Storage](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-datalake-gen2.html)
3030

31-
This link provides examples on how to use the Azure Active Directory service principal to authenticate with Data Lake Store. It also provides instructions on how to access the data in Data Lake Store from Azure Databricks.
31+
This link provides examples on how to use the Azure Active Directory service principal to authenticate with Azure Data Lake Storage. It also provides instructions on how to access the data in Azure Data Lake Storage from Azure Databricks.
3232

3333
- [Azure Blob Storage](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-storage.html)
3434

articles/azure-databricks/frequently-asked-questions-databricks.md

Lines changed: 4 additions & 4 deletions
@@ -21,14 +21,14 @@ Yes. You can use Azure Key Vault to store keys/secrets for use with Azure Databr
2121
## Can I use Azure Virtual Networks with Databricks?
2222
Yes. You can use an Azure Virtual Network (VNET) with Azure Databricks. For more information, see [Deploying Azure Databricks in your Azure Virtual Network](https://docs.azuredatabricks.net/administration-guide/cloud-configurations/azure/vnet-inject.html).
2323

24-
## How do I access Azure Data Lake Store from a notebook?
24+
## How do I access Azure Data Lake Storage from a notebook?
2525

2626
Follow these steps:
2727
1. In Azure Active Directory (Azure AD), provision a service principal, and record its key.
28-
1. Assign the necessary permissions to the service principal in Data Lake Store.
29-
1. To access a file in Data Lake Store, use the service principal credentials in Notebook.
28+
1. Assign the necessary permissions to the service principal in Data Lake Storage.
29+
1. To access a file in Data Lake Storage, use the service principal credentials in a notebook.
3030

31-
For more information, see [Use Data Lake Store with Azure Databricks](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-datalake.html).
31+
For more information, see [Use Azure Data Lake Storage with Azure Databricks](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-datalake.html).
3232

3333
## Fix common problems
3434

articles/azure-databricks/howto-regional-disaster-recovery.md

Lines changed: 3 additions & 3 deletions
@@ -15,7 +15,7 @@ This article describes a disaster recovery architecture useful for Azure Databri
1515

1616
## Azure Databricks architecture
1717

18-
At a high level, when you create an Azure Databricks workspace from the Azure portal, a [managed appliance](../managed-applications/overview.md) is deployed as an Azure resource in your subscription, in the chosen Azure region (for example, West US). This appliance is deployed in an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) with a [Network Security Group](../virtual-network/manage-network-security-group.md) and an Azure Storage account, available in your subscription. The virtual network provides perimeter level security to the Databricks workspace and is protected via network security group. Within the workspace, you can create Databricks clusters by providing the worker and driver VM type and Databricks runtime version. The persisted data is available in your storage account, which can be Azure Blob Storage or Azure Data Lake Store. Once the cluster is created, you can run jobs via notebooks, REST APIs, ODBC/JDBC endpoints by attaching them to a specific cluster.
18+
At a high level, when you create an Azure Databricks workspace from the Azure portal, a [managed appliance](../managed-applications/overview.md) is deployed as an Azure resource in your subscription, in the chosen Azure region (for example, West US). This appliance is deployed in an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) with a [Network Security Group](../virtual-network/manage-network-security-group.md) and an Azure Storage account, available in your subscription. The virtual network provides perimeter level security to the Databricks workspace and is protected via network security group. Within the workspace, you can create Databricks clusters by providing the worker and driver VM type and Databricks runtime version. The persisted data is available in your storage account, which can be Azure Blob Storage or Azure Data Lake Storage. Once the cluster is created, you can run jobs via notebooks, REST APIs, ODBC/JDBC endpoints by attaching them to a specific cluster.
1919

2020
The Databricks control plane manages and monitors the Databricks workspace environment. Any management operation, such as creating a cluster, is initiated from the control plane. All metadata, such as scheduled jobs, is stored in an Azure database with geo-replication for fault tolerance.
2121

@@ -278,9 +278,9 @@ To create your own regional disaster recovery topology, follow these requirement
278278

279279
There's currently no straightforward way to migrate libraries from one workspace to another. Instead, reinstall those libraries into the new workspace manually. It is possible to automate this with a combination of the [DBFS CLI](https://github.com/databricks/databricks-cli#dbfs-cli-examples), to upload custom libraries to the workspace, and the [Libraries CLI](https://github.com/databricks/databricks-cli#libraries-cli).
280280

281-
8. **Migrate Azure blob storage and Azure Data Lake Store mounts**
281+
8. **Migrate Azure blob storage and Azure Data Lake Storage mounts**
282282

283-
Manually remount all [Azure Blob storage](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-storage.html) and [Azure Data Lake Store (Gen 2)](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-datalake-gen2.html) mount points using a notebook-based solution. The storage resources would have been mounted in the primary workspace, and that has to be repeated in the secondary workspace. There is no external API for mounts.
283+
Manually remount all [Azure Blob storage](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-storage.html) and [Azure Data Lake Storage (Gen 2)](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-datalake-gen2.html) mount points using a notebook-based solution. The storage resources would have been mounted in the primary workspace, and that has to be repeated in the secondary workspace. There is no external API for mounts.
284284

285285
9. **Migrate cluster init scripts**
286286

articles/governance/policy/concepts/effects.md

Lines changed: 2 additions & 2 deletions
@@ -196,7 +196,7 @@ following tag changes:
196196
{
197197
"operation": "addOrReplace",
198198
"field": "tags['Dept']",
199-
"field": "[parameters('DeptName')]"
199+
"value": "[parameters('DeptName')]"
200200
}
201201
]
202202
}
@@ -629,4 +629,4 @@ validate the right policies are affecting the right scopes.
629629
- Understand how to [programmatically create policies](../how-to/programmatically-create.md).
630630
- Learn how to [get compliance data](../how-to/getting-compliance-data.md).
631631
- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
632-
- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
632+
- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).

articles/virtual-machines/linux/tutorial-automate-vm-deployment.md

Lines changed: 4 additions & 4 deletions
@@ -124,7 +124,7 @@ It takes a few minutes for the VM to be created, the packages to install, and th
124124
To allow web traffic to reach your VM, open port 80 from the Internet with [az vm open-port](/cli/azure/vm#az-vm-open-port):
125125

126126
```azurecli-interactive
127-
az vm open-port --port 80 --resource-group myResourceGroupAutomate --name myVM
127+
az vm open-port --port 80 --resource-group myResourceGroupAutomate --name myAutomatedVM
128128
```
129129

130130
## Test web app
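One way to confirm the app responds on port 80 is to request the page at the VM's public IP address. A hedged sketch (the resource names follow this tutorial; the query path is an assumption that may vary by CLI version):

```azurecli-interactive
# Look up the VM's public IP address, then request the default page.
publicIp=$(az vm show \
    --resource-group myResourceGroupAutomate \
    --name myAutomatedVM \
    --show-details \
    --query publicIps \
    --output tsv)
curl --max-time 10 "http://$publicIp"
```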
@@ -163,7 +163,7 @@ For production use, you should import a valid certificate signed by trusted prov
163163
az keyvault certificate create \
164164
--vault-name $keyvault_name \
165165
--name mycert \
166-
--policy "$(az keyvault certificate get-default-policy)"
166+
--policy "$(az keyvault certificate get-default-policy --output json)"
167167
```
168168

169169

@@ -175,7 +175,7 @@ secret=$(az keyvault secret list-versions \
175175
--vault-name $keyvault_name \
176176
--name mycert \
177177
--query "[?attributes.enabled].id" --output tsv)
178-
vm_secret=$(az vm secret format --secret "$secret")
178+
vm_secret=$(az vm secret format --secret "$secret" --output json)
179179
```
180180

181181

@@ -254,7 +254,7 @@ To allow secure web traffic to reach your VM, open port 443 from the Internet wi
254254
```azurecli-interactive
255255
az vm open-port \
256256
--resource-group myResourceGroupAutomate \
257-
--name myVMSecured \
257+
--name myVMWithCerts \
258258
--port 443
259259
```
260260

includes/resource-manager-governance-tags.md

Lines changed: 3 additions & 0 deletions
@@ -25,3 +25,6 @@ The following limitations apply to tags:
2525
* Tags applied to the resource group are not inherited by the resources in that resource group.
2626
* Tags can't be applied to classic resources such as Cloud Services.
2727
* Tag names can't contain these characters: `<`, `>`, `%`, `&`, `\`, `?`, `/`
28+
29+
> [!NOTE]
30+
> Currently, Azure DNS zones and Traffic Manager services also don't allow the use of spaces in the tag name.
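The disallowed-character rule above can be checked locally before tagging a resource. A minimal shell sketch (the tag name below is a hypothetical example):

```shell
# Check a proposed tag name against the characters Azure disallows: < > % & \ ? /
tagname="Dept<Name>"
if printf '%s' "$tagname" | grep -q '[<>%&\\?/]'; then
    result="invalid"
else
    result="valid"
fi
echo "$tagname is $result"
```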
