Commit 9360ea9

Merge pull request #172148 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/MicrosoftDocs/azure-docs (branch master)
2 parents 85e44fd + 0ccc018 commit 9360ea9

File tree: 8 files changed (+19 additions, −17 deletions)

articles/aks/spot-node-pool.md

Lines changed: 2 additions & 2 deletions

@@ -66,7 +66,7 @@ By default, you create a node pool with a *priority* of *Regular* in your AKS cl
 The command also enables the [cluster autoscaler][cluster-autoscaler], which is recommended to use with spot node pools. Based on the workloads running in your cluster, the cluster autoscaler scales up and scales down the number of nodes in the node pool. For spot node pools, the cluster autoscaler will scale up the number of nodes after an eviction if additional nodes are still needed. If you change the maximum number of nodes a node pool can have, you also need to adjust the `maxCount` value associated with the cluster autoscaler. If you do not use a cluster autoscaler, upon eviction, the spot pool will eventually decrease to zero and require a manual operation to receive any additional spot nodes.
 
 > [!Important]
-> Only schedule workloads on spot node pools that can handle interruptions, such as batch processing jobs and testing environments. It is recommended that you set up [taints and tolerations][taints-tolerations] on your spot node pool to ensure that only workloads that can handle node evictions are scheduled on a spot node pool. For example, the above command ny default adds a taint of *kubernetes.azure.com/scalesetpriority=spot:NoSchedule* so only pods with a corresponding toleration are scheduled on this node.
+> Only schedule workloads on spot node pools that can handle interruptions, such as batch processing jobs and testing environments. It is recommended that you set up [taints and tolerations][taints-tolerations] on your spot node pool to ensure that only workloads that can handle node evictions are scheduled on a spot node pool. For example, the above command by default adds a taint of *kubernetes.azure.com/scalesetpriority=spot:NoSchedule* so only pods with a corresponding toleration are scheduled on this node.
 
 ## Verify the spot node pool

@@ -121,4 +121,4 @@ In this article, you learned how to add a spot node pool to an AKS cluster. For
 [spot-toleration]: #verify-the-spot-node-pool
 [taints-tolerations]: operator-best-practices-advanced-scheduler.md#provide-dedicated-nodes-using-taints-and-tolerations
 [use-multiple-node-pools]: use-multiple-node-pools.md
-[vmss-spot]: ../virtual-machine-scale-sets/use-spot.md
+[vmss-spot]: ../virtual-machine-scale-sets/use-spot.md
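Since the diff above calls out the *kubernetes.azure.com/scalesetpriority=spot:NoSchedule* taint, a pod intended for the spot pool needs a matching toleration. A minimal sketch of that spec fragment (the taint key, value, and effect come from the diff; the surrounding pod-spec shape is generic Kubernetes, not part of this commit):

```yaml
# Pod spec fragment: tolerate the taint AKS adds to spot node pools
# so the scheduler may place this pod on spot nodes.
tolerations:
  - key: "kubernetes.azure.com/scalesetpriority"
    operator: "Equal"
    value: "spot"
    effect: "NoSchedule"
```

Without this toleration, the `NoSchedule` effect keeps ordinary pods off the spot nodes, which is exactly the isolation the doc recommends.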

articles/app-service/manage-create-arc-environment.md

Lines changed: 2 additions & 3 deletions

@@ -181,14 +181,13 @@ While a [Log Analytic workspace](../azure-monitor/logs/quick-create-workspace.md
     --workspace-name $workspaceName \
     --query customerId \
     --output tsv)
-logAnalyticsWorkspaceIdEnc=$(printf %s $logAnalyticsWorkspaceId | base64) # Needed for the next step
+logAnalyticsWorkspaceIdEnc=$(printf %s $logAnalyticsWorkspaceId | base64 -w0) # Needed for the next step
 logAnalyticsKey=$(az monitor log-analytics workspace get-shared-keys \
     --resource-group $groupName \
     --workspace-name $workspaceName \
     --query primarySharedKey \
     --output tsv)
-logAnalyticsKeyEncWithSpace=$(printf %s $logAnalyticsKey | base64)
-logAnalyticsKeyEnc=$(echo -n "${logAnalyticsKeyEncWithSpace//[[:space:]]/}") # Needed for the next step
+logAnalyticsKeyEnc=$(printf %s $logAnalyticsKey | base64 -w0) # Needed for the next step
 ```
 
 # [PowerShell](#tab/powershell)

articles/azure-government/documentation-government-get-started-connect-with-ps.md

Lines changed: 2 additions & 2 deletions

@@ -41,7 +41,7 @@ When you start PowerShell, you have to tell Azure PowerShell to connect to Azure
 
 | Connection type | Command |
 | --- | --- |
-| [Azure](/powershell/module/az.accounts/Connect-AzAccount) commands |`Connect-AzAccount -EnvironmentName AzureUSGovernment` |
+| [Azure](/powershell/module/az.accounts/Connect-AzAccount) commands |`Connect-AzAccount -Environment AzureUSGovernment` |
 | [Azure Active Directory](/powershell/module/azuread/connect-azuread) commands |`Connect-AzureAD -AzureEnvironmentName AzureUSGovernment` |
 | [Azure (Classic deployment model)](/powershell/module/servicemanagement/azure.service/add-azureaccount) commands |`Add-AzureAccount -Environment AzureUSGovernment` |
 | [Azure Active Directory (Classic deployment model)](/previous-versions/azure/jj151815(v=azure.100)) commands |`Connect-MsolService -AzureEnvironment UsGovernment` |

@@ -74,4 +74,4 @@ Get-AzureLocation # For classic deployment model
 This quickstart showed you how to use PowerShell to connect to Azure Government. Once you are connected to Azure Government, you may want to explore Azure services. Make sure you check out the variations, described in [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md). To learn more about Azure services, continue to the Azure documentation.
 
 > [!div class="nextstepaction"]
-> [Azure documentation](../index.yml)
+> [Azure documentation](../index.yml)

articles/azure-resource-manager/templates/overview.md

Lines changed: 1 addition & 1 deletion

@@ -31,7 +31,7 @@ If you're trying to decide between using ARM templates and one of the other infr
 
 ![Template deployment comparison](./media/overview/template-processing.png)
 
-* **Modular files**: You can break your templates into smaller, reusable components and link them together at deployment time. You can also nest one template inside another templates.
+* **Modular files**: You can break your templates into smaller, reusable components and link them together at deployment time. You can also nest one template inside another template.
 
 * **Create any Azure resource**: You can immediately use new Azure services and features in templates. As soon as a resource provider introduces new resources, you can deploy those resources through templates. You don't have to wait for tools or modules to be updated before using the new services.
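For readers unfamiliar with the nesting that bullet mentions, a nested template is declared as a `Microsoft.Resources/deployments` resource whose `properties.template` holds the inner template. A minimal sketch (the resource name and the empty inner `resources` array are placeholders, not from this commit):

```json
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2021-04-01",
  "name": "nestedDeploymentExample",
  "properties": {
    "mode": "Incremental",
    "template": {
      "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "resources": []
    }
  }
}
```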
articles/azure-sql/database/ledger-overview.md

Lines changed: 2 additions & 2 deletions

@@ -55,7 +55,7 @@ Typical patterns for solving this problem involve replicating data from the bloc
 
 Each transaction that the database receives is cryptographically hashed (SHA-256). The hash function uses the value of the transaction, along with the hash of the previous transaction, as input to the hash function. (The value includes hashes of the rows contained in the transaction.) The function cryptographically links all transactions together, like a blockchain.
 
-Cryptographically hashed ([database digests](#database-digests)) represent the state of the database. They're periodically generated and stored outside Azure SQL Database in a tamper-proof storage location. An example of a storage location is the [immutable storage feature of Azure Blob Storage](../../storage/blobs/immutable-storage-overview.md) or [Azure Confidential Ledger](../../confidential-ledger/index.yml). Database digests are later used to verify the integrity of the database by comparing the value of the hash in the digest against the calculated hashes in database.
+Cryptographically hashed [database digests](#database-digests) represent the state of the database. They're periodically generated and stored outside Azure SQL Database in a tamper-proof storage location. An example of a storage location is the [immutable storage feature of Azure Blob Storage](../../storage/blobs/immutable-storage-overview.md) or [Azure Confidential Ledger](../../confidential-ledger/index.yml). Database digests are later used to verify the integrity of the database by comparing the value of the hash in the digest against the calculated hashes in the database.
 
 Ledger functionality is introduced to tables in Azure SQL Database in two forms:

@@ -126,4 +126,4 @@ Ideally, users should run ledger verification only when the organization that's
 
 - [Quickstart: Create a SQL database with ledger enabled](ledger-create-a-single-database-with-ledger-enabled.md)
 - [Access the digests stored in Azure Confidential Ledger](ledger-how-to-access-acl-digest.md)
-- [Verify a ledger table to detect tampering](ledger-verify-database.md)
+- [Verify a ledger table to detect tampering](ledger-verify-database.md)
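The hash chaining the ledger overview describes, where each transaction hash folds in the previous transaction's hash, can be sketched purely illustratively with `sha256sum` (the "transactions" below are made-up strings, not how Azure SQL serializes transactions):

```shell
# Illustrative only: chain three "transactions" by feeding each hash
# the previous hash concatenated with the new transaction's content.
prev_hash="0"
for tx in "INSERT row 1" "UPDATE row 1" "DELETE row 1"; do
  prev_hash=$(printf '%s%s' "$prev_hash" "$tx" | sha256sum | cut -d' ' -f1)
done
echo "chain head: $prev_hash"
```

Altering any earlier transaction changes every later hash in the chain, which is what makes tampering detectable when the chain head is compared against a digest stored outside the database.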

articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md

Lines changed: 3 additions & 2 deletions

@@ -140,8 +140,9 @@ For your SQL Server availability group or failover cluster instance, consider th
 * If optimizing SQL Server VM performance does not resolve your unexpected failovers, consider [relaxing the monitoring](hadr-cluster-best-practices.md#relaxed-monitoring) for the availability group or failover cluster instance. However, doing so may not address the underlying source of the issue and could mask symptoms by reducing the likelihood of failure. You may still need to investigate and address the underlying root cause. For Windows Server 2012 or higher, use the following recommended values:
     - **Lease timeout**: Use this equation to calculate the maximum lease timeout value:
       `Lease timeout < (2 * SameSubnetThreshold * SameSubnetDelay)`.
-      Start with 40 seconds. If you're using the relaxed `SameSubnetThreshold` and `SameSubnetDelay` values recommended previously, do not exceed 80 seconds for the lease timeout value.
-    - **Max failures in a specified period**: Set this value to 6.
+      Start with 40 seconds. If you're using the relaxed `SameSubnetThreshold` and `SameSubnetDelay` values recommended previously, do not exceed 80 seconds for the lease timeout value.
+    - **Max failures in a specified period**: You can set this value to 6.
+    - **Healthcheck timeout**: You can set this value to 60000 initially and adjust as necessary.
 * When using the virtual network name (VNN) to connect to your HADR solution, specify `MultiSubnetFailover = true` in the connection string, even if your cluster only spans one subnet.
     - If the client does not support `MultiSubnetFailover = True` you may need to set `RegisterAllProvidersIP = 0` and `HostRecordTTL = 300` to cache client credentials for shorter durations. However, doing so may cause additional queries to the DNS server.
 - To connect to your HADR solution using the distributed network name (DNN), consider the following:
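As a worked example of the lease-timeout equation above, using assumed relaxed cluster settings of `SameSubnetDelay` = 2000 ms and `SameSubnetThreshold` = 20 (hypothetical values here; check your cluster's actual configuration):

```shell
# Assumed relaxed cluster settings (hypothetical values):
same_subnet_delay_ms=2000   # heartbeat interval
same_subnet_threshold=20    # missed heartbeats tolerated before failure
# The lease timeout must stay below 2 * threshold * delay:
max_lease_ms=$((2 * same_subnet_threshold * same_subnet_delay_ms))
echo "lease timeout must be < $((max_lease_ms / 1000)) seconds"
```

With these values the bound works out to 80 seconds, which matches the "do not exceed 80 seconds" guidance in the diff.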

articles/frontdoor/front-door-route-matching.md

Lines changed: 5 additions & 4 deletions

@@ -13,7 +13,7 @@
 ms.date: 09/28/2020
 ms.author: duau
 ---
 
-# ​​How requests are matched to a routing rule
+# How requests are matched to a routing rule
 
 After establishing a connection and completing a TLS handshake, when a request lands on a Front Door environment one of the first things that Front Door does is determine which particular routing rule to match the request to and then take the defined action in the configuration. The following document explains how Front Door determines which Route configuration to use when processing an HTTP request.
 
@@ -64,8 +64,8 @@ If the following incoming requests were sent to Front Door, they would match aga
 ### Path matching
 After determining the specific frontend host and filtering possible routing rules to just the routes with that frontend host, Front Door then filters the routing rules based on the request path. We use a similar logic as frontend hosts:
 
-1. Look for any routing rule with an exact match on the Path
-2. If no exact match Paths, look for routing rules with a wildcard Path that matches
+1. Look for any routing rule with an exact match on the Path.
+2. If no exact match Paths, look for routing rules with a wildcard Path that matches.
 3. If no routing rules are found with a matching Path, then reject the request and return a 400: Bad Request error HTTP response.
 
 >[!NOTE]

@@ -118,7 +118,8 @@ Given that configuration, the following example matching table would result:
 > | profile.domain.com/other | None. Error 400: Bad Request |
 
 ### Routing decision
-Once we've matched to a single Front Door routing rule, we then need to choose how to process the request. If for the matched routing rule, Front Door has a cached response available then the same gets served back to the client. Otherwise, the next thing that gets evaluated is whether you have configured [URL Rewrite (custom forwarding path)](front-door-url-rewrite.md) for the matched routing rule or not. If there isn't a custom forwarding path defined, then the request gets forwarded to the appropriate backend in the configured backend pool as is. Else, the request path is updated as per the [custom forwarding path](front-door-url-rewrite.md) defined and then forward to the backend.
+
+After you have matched to a single Front Door routing rule, choose how to process the request. If Front Door has a cached response available for the matched routing rule, the cached response is served back to the client. If Front Door doesn't have a cached response for the matched routing rule, what's evaluated next is whether you have configured [URL rewrite (a custom forwarding path)](front-door-url-rewrite.md) for the matched routing rule. If no custom forwarding path is defined, the request is forwarded to the appropriate backend in the configured backend pool as-is. If a custom forwarding path has been defined, the request path is updated per the defined [custom forwarding path](front-door-url-rewrite.md) and then forwarded to the backend.
 
 ## Next steps
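The three-step path matching described in that diff (exact path first, then wildcard, then reject with 400) can be sketched as ordered pattern matching. This is illustrative pseudologic with made-up routes, not Front Door's actual implementation:

```shell
# Hypothetical route table: one exact path and one wildcard path.
# case tries patterns in order, mirroring steps 1-3 of the doc:
# exact match first, wildcard second, reject otherwise.
match_route() {
  case "$1" in
    /users/profile.html) echo "exact-match route" ;;
    /users/*)            echo "wildcard route" ;;
    *)                   echo "400 Bad Request" ;;
  esac
}
match_route /users/profile.html   # step 1: exact match wins
match_route /users/other.html     # step 2: falls through to the wildcard
match_route /other                # step 3: no matching route, rejected
```

Ordering matters: `/users/profile.html` also satisfies the wildcard pattern, but listing the exact path first gives it priority, just as the doc's step 1 precedes step 2.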

articles/managed-instance-apache-cassandra/create-cluster-portal.md

Lines changed: 2 additions & 1 deletion

@@ -112,7 +112,8 @@ export SSL_VALIDATE=false
 
 # Connect to CQLSH (replace <IP> with the private IP addresses of the nodes in your Datacenter):
 host=("<IP>" "<IP>" "<IP>")
-cqlsh $host 9042 -u cassandra -p cassandra --ssl
+initial_admin_password="Password provided when creating the cluster"
+cqlsh $host 9042 -u cassandra -p "$initial_admin_password" --ssl
 ```
 
 ## Troubleshooting
