
Commit 99dbc8c

Merge pull request #134442 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/MicrosoftDocs/azure-docs (branch master)
2 parents dfaa282 + 1e27335 commit 99dbc8c

11 files changed: +33 −20 lines

articles/aks/faq.md

Lines changed: 2 additions & 2 deletions

@@ -189,7 +189,7 @@ While AKS has resilience mechanisms to withstand such a config and recover from

 ## Can I use custom VM extensions?

-No AKS is a managed service, and manipulation of the IaaS resources is not supported. To install custom components, etc. please leverage the Kubernetes APIs and mechanisms. For example, leverage DaemonSets to install required components.
+No, AKS is a managed service, and manipulation of the IaaS resources is not supported. To install custom components, etc. please leverage the Kubernetes APIs and mechanisms. For example, leverage DaemonSets to install required components.

 ## Does AKS store any customer data outside of the cluster's region?

@@ -226,4 +226,4 @@ The feature to enable storing customer data in a single region is currently only

 [admission-controllers]: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
 [private-clusters-github-issue]: https://github.com/Azure/AKS/issues/948
 [csi-driver]: https://github.com/Azure/secrets-store-csi-driver-provider-azure
-[vm-sla]: https://azure.microsoft.com/support/legal/sla/virtual-machines/
+[vm-sla]: https://azure.microsoft.com/support/legal/sla/virtual-machines/
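The AKS answer above points to DaemonSets as the supported alternative to custom VM extensions. As a rough illustration (not part of this commit — the names and container image below are hypothetical placeholders), a minimal DaemonSet manifest that runs a per-node setup container might look like:

```yaml
# Minimal sketch of the DaemonSet approach the FAQ suggests.
# All names and the container image are illustrative placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-setup            # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-setup
  template:
    metadata:
      labels:
        app: node-setup
    spec:
      containers:
        - name: node-setup
          # Placeholder image; in practice this would install or run the
          # required component on every node in the cluster.
          image: example.azurecr.io/node-setup:1.0
```

Because a DaemonSet schedules one pod on every node, nodes added later by scaling or upgrades pick up the component automatically, which is why the FAQ recommends it over VM-level customization.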

articles/app-service/app-service-key-vault-references.md

Lines changed: 3 additions & 3 deletions

@@ -27,8 +27,8 @@ In order to read secrets from Key Vault, you need to have a vault created and gi

 1. Create an [access policy in Key Vault](../key-vault/general/secure-your-key-vault.md#key-vault-access-policies) for the application identity you created earlier. Enable the "Get" secret permission on this policy. Do not configure the "authorized application" or `applicationId` settings, as this is not compatible with a managed identity.

-    > [!IMPORTANT]
-    > Key Vault references are not presently able to resolve secrets stored in a key vault with [network restrictions](../key-vault/general/overview-vnet-service-endpoints.md).
+    > [!IMPORTANT]
+    > Key Vault references are not presently able to resolve secrets stored in a key vault with [network restrictions](../key-vault/general/overview-vnet-service-endpoints.md) unless the app is hosted within an [App Service Environment](./environment/intro.md).

 ## Reference syntax

@@ -42,9 +42,9 @@ A Key Vault reference is of the form `@Microsoft.KeyVault({referenceString})`, w

 > [!NOTE]
 > Versions are currently required. When rotating secrets, you will need to update the version in your application configuration.
-
 For example, a complete reference would look like the following:
+
 ```
 @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931)
 ```

articles/app-service/deploy-container-github-action.md

Lines changed: 7 additions & 5 deletions

@@ -186,15 +186,17 @@ jobs:

 ## Deploy to an App Service container

-To deploy your image to a custom container in App Service, use the `azure/webapps-deploy@v2` action. This action has five parameters:
+To deploy your image to a custom container in App Service, use the `azure/webapps-deploy@v2` action. This action has seven parameters:

 | **Parameter** | **Explanation** |
 |---------|---------|
 | **app-name** | (Required) Name of the App Service app |
-| **publish-profile** | (Optional) Publish profile file contents with Web Deploy secrets |
-| **images** | Fully qualified container image(s) name. For example, 'myregistry.azurecr.io/nginx:latest' or 'python:3.7.2-alpine/'. For multi-container scenario multiple container image names can be provided (multi-line separated) |
+| **publish-profile** | (Optional) Applies to Web Apps (Windows and Linux) and Web App Containers (Linux). Multi-container scenario not supported. Publish profile (\*.publishsettings) file contents with Web Deploy secrets |
 | **slot-name** | (Optional) Enter an existing Slot other than the Production slot |
-| **configuration-file** | (Optional) Path of the Docker-Compose file |
+| **package** | (Optional) Applies to Web App only: Path to package or folder. \*.zip, \*.war, \*.jar or a folder to deploy |
+| **images** | (Required) Applies to Web App Containers only: Specify the fully qualified container image(s) name. For example, 'myregistry.azurecr.io/nginx:latest' or 'python:3.7.2-alpine/'. For a multi-container app, multiple container image names can be provided (multi-line separated) |
+| **configuration-file** | (Optional) Applies to Web App Containers only: Path of the Docker-Compose file. Should be a fully qualified path or relative to the default working directory. Required for multi-container apps. |
+| **startup-command** | (Optional) Enter the start-up command. For example, dotnet run or dotnet filename.dll |

 # [Publish profile](#tab/publish-profile)

@@ -283,4 +285,4 @@ You can find our set of Actions grouped into different repositories on GitHub, e

 - [K8s deploy](https://github.com/Azure/k8s-deploy)

-- [Starter Workflows](https://github.com/actions/starter-workflows)
+- [Starter Workflows](https://github.com/actions/starter-workflows)
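As a sketch of how the `azure/webapps-deploy@v2` parameters documented above fit together in a workflow step (not part of this commit — the app name, registry, and secret name are placeholders), a single-container deployment could look like:

```yaml
# Illustrative workflow step only; all values are placeholders.
- name: Deploy to App Service container
  uses: azure/webapps-deploy@v2
  with:
    app-name: my-app-name                          # (Required) App Service app name
    publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
    images: myregistry.azurecr.io/nginx:latest     # Web App Containers only
```

For a multi-container app, `images` would list one image per line and `configuration-file` would point at the Docker-Compose file, per the parameter table above.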

articles/azure-monitor/insights/network-performance-monitor-faq.md

Lines changed: 1 addition & 1 deletion

@@ -91,7 +91,7 @@ If a hop is red, it signifies that it is part of at-least one unhealthy path. NP
 NPM uses a probabilistic mechanism to assign fault-probabilities to each network path, network segment, and the constituent network hops based on the number of unhealthy paths they are a part of. As the network segments and hops become part of more unhealthy paths, the fault-probability associated with them increases. This algorithm works best when you have many nodes with the NPM agent connected to each other, as this increases the data points for calculating the fault-probabilities.

 ### How can I create alerts in NPM?
-Creating alerts from NPM UI is currently failing due to a issue. Please create alerts manually.
+Currently, creating alerts from the NPM UI is failing due to a known issue. Please [create alerts manually](../platform/alerts-log.md).

 ### What are the default Log Analytics queries for alerts
 Performance monitor query

articles/azure-monitor/log-query/query-optimization.md

Lines changed: 1 addition & 1 deletion

@@ -106,7 +106,7 @@ Syslog
 | count
 ```

-In some cases the evaluated column is created implicitly by the query processing enine since the filtering is done not just on the field:
+In some cases the evaluated column is created implicitly by the query processing engine since the filtering is done not just on the field:
 ```Kusto
 //less efficient
 SecurityEvent

articles/cosmos-db/how-to-manage-indexing-policy.md

Lines changed: 7 additions & 0 deletions

@@ -743,6 +743,13 @@ Update the container with changes
 ```python
 response = database_client.replace_container(container_client, container['partitionKey'], indexingPolicy)
 ```
+
+Retrieve the index transformation progress from the response headers
+```python
+container_client.read(populate_quota_info = True,
+    response_hook = lambda h,p: print(h['x-ms-documentdb-collection-index-transformation-progress']))
+```
+
 ---

 ## Next steps

articles/cosmos-db/performance-tips-dotnet-sdk-v3-sql.md

Lines changed: 1 addition & 1 deletion

@@ -159,7 +159,7 @@ When you're working on Azure Functions, instances should also follow the existin
 For workloads that have heavy create payloads, set the `EnableContentResponseOnWrite` request option to `false`. The service will no longer return the created or updated resource to the SDK. Normally, because the application has the object that's being created, it doesn't need the service to return it. The header values are still accessible, like a request charge. Disabling the content response can help improve performance, because the SDK no longer needs to allocate memory or serialize the body of the response. It also reduces the network bandwidth usage to further help performance.

 ```csharp
-ItemRequestOption requestOptions = new ItemRequestOptions() { EnableContentResponseOnWrite = false };
+ItemRequestOptions requestOptions = new ItemRequestOptions() { EnableContentResponseOnWrite = false };
 ItemResponse<Book> itemResponse = await this.container.CreateItemAsync<Book>(book, new PartitionKey(book.pk), requestOptions);
 // Resource will be null
 itemResponse.Resource

articles/databox-online/azure-stack-edge-gpu-recover-device-failure.md

Lines changed: 4 additions & 2 deletions

@@ -58,7 +58,8 @@ To prepare for a potential device failure, you may have deployed one the followi
 | Third-party software | Reference to the solution |
 |--------------------------------|---------------------------------------------------------|
 | Cohesity | [https://www.cohesity.com/solution/cloud/azure/](https://www.cohesity.com/solution/cloud/azure/) <br> For details, contact Cohesity. |
-| Veritas | For details, contact Veritas. |
+| Commvault | https://www.commvault.com/azure <br> For details, contact Commvault. |
+| Veritas | http://veritas.com/azure <br> For details, contact Veritas. |

 After the replacement device is fully configured, enable the device for local storage.

@@ -78,7 +79,8 @@ To prepare for a potential device failure, you may have deployed one of the foll
 |-------------------------|----------------|--------------------------------------------------------------------------|
 | Microsoft Azure Recovery Services (MARS) agent for Azure Backup | Windows | [About MARS agent](/azure/backup/backup-azure-about-mars) |
 | Cohesity | Windows, Linux | [Microsoft Azure Integration, Backup and Recovery solution brief](https://www.cohesity.com/solution/cloud/azure) <br>For details, contact Cohesity. |
-| Veritas | Windows, Linux | For details, contact Veritas. |
+| Commvault | Windows, Linux | https://www.commvault.com/azure <br> For details, contact Commvault. |
+| Veritas | Windows, Linux | http://veritas.com/azure <br> For details, contact Veritas. |

 After the replacement device is fully configured, you can redeploy the VMs with the VM image previously used.

articles/hdinsight/hdinsight-business-continuity-architecture.md

Lines changed: 4 additions & 2 deletions

@@ -19,7 +19,7 @@ This article gives a few examples of business continuity architectures you might

 ## Apache Hive and Interactive Query

-[Hive Replication V2](https://cwiki.apache.org/confluence/display/Hive/HiveReplicationv2Development#HiveReplicationv2Development-REPLSTATUS) is the recommended for business continuity in HDInsight Hive and Interactive query clusters. The persistent sections of a standalone Hive cluster that need to be replicated are the Storage Layer and the Hive metastore. Hive clusters in a multi-user scenario with Enterprise Security Package need Azure Active Directory Domain Services and Ranger Metastore.
+[Hive Replication V2](https://cwiki.apache.org/confluence/display/Hive/HiveReplicationv2Development#HiveReplicationv2Development-REPLSTATUS) is recommended for business continuity in HDInsight Hive and Interactive query clusters. The persistent sections of a standalone Hive cluster that need to be replicated are the Storage Layer and the Hive metastore. Hive clusters in a multi-user scenario with Enterprise Security Package need Azure Active Directory Domain Services and Ranger Metastore.

 :::image type="content" source="./media/hdinsight-business-continuity-architecture/hive-interactive-query.png" alt-text="Hive and interactive query architecture":::

@@ -53,6 +53,8 @@ In an *active primary with standby secondary*, applications write to the active

 :::image type="content" source="./media/hdinsight-business-continuity-architecture/active-primary-standby-secondary.png" alt-text="active primary with standby secondary":::

+For more information on Hive replication and code samples, see [Apache Hive replication in Azure HDInsight clusters](https://docs.microsoft.com/azure/hdinsight/interactive-query/apache-hive-replication)
+
 ## Apache Spark

 Spark workloads may or may not involve a Hive component. To enable Spark SQL workloads to read and write data from Hive, HDInsight Spark clusters share Hive custom metastores from Hive/Interactive query clusters in the same region. In such scenarios, cross region replication of Spark workloads must also accompany the replication of Hive metastores and storage. The failover scenarios in this section apply to both:

@@ -203,4 +205,4 @@ To learn more about the items discussed in this article, see:

 * [Azure HDInsight business continuity](./hdinsight-business-continuity.md)
 * [Azure HDInsight highly available solution architecture case study](./hdinsight-high-availability-case-study.md)
-* [What is Apache Hive and HiveQL on Azure HDInsight?](./hadoop/hdinsight-use-hive.md)
+* [What is Apache Hive and HiveQL on Azure HDInsight?](./hadoop/hdinsight-use-hive.md)

articles/machine-learning/how-to-configure-environment.md

Lines changed: 1 addition & 1 deletion

@@ -271,7 +271,7 @@ If install was successful, the imported library should look like one of these:
 If the cluster was created with Databricks non ML runtime 7.1 or above, run the following command in the first cell of your notebook to install the AML SDK.

 ```
-%pip install -r https://aka.ms/automl_linux_requirements.txt
+%pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt
 ```
 For Databricks non ML runtime 7.0 and lower, install the AML SDK using the [init script](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-databricks/automl/README.md).
