
Commit b79b937

Merge pull request #209253 from MicrosoftDocs/repo_sync_working_branch

Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)

2 parents: 460b144 + b46d8f6

6 files changed: 12 additions and 9 deletions

articles/machine-learning/concept-mlflow.md
Lines changed: 1 addition & 1 deletion

@@ -46,7 +46,7 @@ With MLflow Tracking you can connect Azure Machine Learning as the backend of yo

* [Track ML experiments and models running locally or in the cloud](how-to-use-mlflow-cli-runs.md) with MLflow in Azure Machine Learning.
* [Track Azure Databricks ML experiments](how-to-use-mlflow-azure-databricks.md) with MLflow in Azure Machine Learning.
- * [Track Azure Synapse Analytics ML experiments](how-to-use-mlflow-azure-databricks.md) with MLflow in Azure Machine Learning.
+ * [Track Azure Synapse Analytics ML experiments](how-to-use-mlflow-azure-synapse.md) with MLflow in Azure Machine Learning.

> [!IMPORTANT]
> - MLflow in R support is limited to tracking experiment's metrics, parameters and models on Azure Machine Learning jobs. RStudio or Jupyter Notebooks with R kernels are not supported. Model registries are not supported using the MLflow R SDK. As an alternative, use Azure ML CLI or Azure ML studio for model registration and management. View the following [R example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/r).
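
The tracking workflow behind the corrected link is the same in each of these scenarios: resolve the workspace's MLflow tracking URI and point the MLflow client at it. A minimal sketch in bash, assuming the Azure CLI with the `ml` (v2) extension is installed and signed in; the resource names are placeholders, `train.py` stands for any MLflow-instrumented script, and the `mlflow_tracking_uri` output field should be verified against your CLI version:

```bash
# Resolve the workspace's MLflow tracking URI and export it for the MLflow client.
export MLFLOW_TRACKING_URI=$(az ml workspace show \
  --name <workspace-name> \
  --resource-group <resource-group> \
  --query mlflow_tracking_uri -o tsv)

# Any script that calls mlflow.start_run() / mlflow.log_metric() now logs to the
# Azure Machine Learning workspace instead of a local ./mlruns directory.
python train.py
```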

articles/storage/blobs/lifecycle-management-policy-configure.md
Lines changed: 3 additions & 0 deletions

@@ -219,6 +219,9 @@ A lifecycle management policy must be read or written in full. Partial updates a

> [!NOTE]
> If you enable firewall rules for your storage account, lifecycle management requests may be blocked. You can unblock these requests by providing exceptions for trusted Microsoft services. For more information, see the **Exceptions** section in [Configure firewalls and virtual networks](../common/storage-network-security.md#exceptions).

+ > [!NOTE]
+ > A lifecycle management policy can't change the tier of a blob that uses an encryption scope.

## See also

- [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md)
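
Since a policy must be read or written in full, updates are normally applied by submitting a complete JSON document. A minimal sketch in bash, assuming the Azure CLI is installed and signed in; the rule name, `logs/` prefix, and 30-day threshold are illustrative placeholders:

```bash
# Write a complete policy document; partial updates aren't supported, so the file
# must contain every rule you want to keep.
cat << 'EOF' > policy.json
{
  "rules": [
    {
      "enabled": true,
      "name": "move-logs-to-cool",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "logs/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 }
          }
        }
      }
    }
  ]
}
EOF

# Apply the policy to the storage account (placeholder resource names).
az storage account management-policy create \
  --account-name <storage-account> \
  --resource-group <resource-group> \
  --policy @policy.json
```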

articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-red-hat.md
Lines changed: 2 additions & 2 deletions

@@ -500,12 +500,12 @@ Follow the steps in [Setting up Pacemaker on Red Hat Enterprise Linux](./high-av

### Implement the Python system replication hook SAPHanaSR

- This is important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR python hook.
+ This is important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook.

1. **[A]** Install the HANA "system replication hook". The hook needs to be installed on both HANA DB nodes.

   > [!TIP]
-  > The python hook can only be implemented for HANA 2.0.
+  > The Python hook can only be implemented for HANA 2.0.

1. Prepare the hook as `root`.
articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-suse.md
Lines changed: 1 addition & 1 deletion

@@ -479,7 +479,7 @@ Follow the steps in, [Setting up Pacemaker on SUSE Enterprise Linux](./high-avai

### Implement the Python system replication hook SAPHanaSR

- This is an important step to optimize the integration with the cluster and improve the detection, when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR python hook. Follow the steps mentioned in, [Implement the Python System Replication hook SAPHanaSR](./sap-hana-high-availability.md#implement-the-python-system-replication-hook-saphanasr)
+ This is an important step to optimize the integration with the cluster and improve the detection, when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook. Follow the steps mentioned in, [Implement the Python System Replication hook SAPHanaSR](./sap-hana-high-availability.md#implement-the-python-system-replication-hook-saphanasr)

## Configure SAP HANA cluster resources

articles/virtual-machines/workloads/sap/sap-hana-high-availability-rhel.md
Lines changed: 2 additions & 2 deletions

@@ -557,12 +557,12 @@ Follow the steps in [Setting up Pacemaker on Red Hat Enterprise Linux in Azure](

## Implement the Python system replication hook SAPHanaSR

- This is important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR python hook.
+ This is important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook.

1. **[A]** Install the HANA "system replication hook". The hook needs to be installed on both HANA DB nodes.

   > [!TIP]
-  > The python hook can only be implemented for HANA 2.0.
+  > The Python hook can only be implemented for HANA 2.0.

1. Prepare the hook as `root`.

articles/virtual-machines/workloads/sap/sap-hana-high-availability.md
Lines changed: 3 additions & 3 deletions

@@ -492,13 +492,13 @@ The steps in this section use the following prefixes:

## Implement the Python system replication hook SAPHanaSR

- This is important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR python hook.
+ This is important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook.

1. **[A]** Install the HANA "system replication hook". The hook needs to be installed on both HANA DB nodes.

   > [!TIP]
   > Verify that package SAPHanaSR is at least version 0.153 to be able to use the SAPHanaSR Python hook functionality.
-  > The python hook can only be implemented for HANA 2.0.
+  > The Python hook can only be implemented for HANA 2.0.

1. Prepare the hook as `root`.

@@ -530,7 +530,7 @@ This is important step to optimize the integration with the cluster and improve

2. **[A]** The cluster requires sudoers configuration on each cluster node for <sid\>adm. In this example that is achieved by creating a new file. Execute the commands as `root`.
   ```bash
   cat << EOF > /etc/sudoers.d/20-saphana
-  # Needed for SAPHanaSR python hook
+  # Needed for SAPHanaSR Python hook
   hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_hn1_site_srHook_*
   EOF
   ```
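
For context on what the sudoers file above supports: once the SAPHanaSR package is installed, the hook is registered in the HANA `global.ini`. A hedged sketch in bash, assuming SID `HN1`, the default `/hana/shared/<SID>/global/hdb/custom/config/global.ini` location, and a provider path of `/usr/share/SAPHanaSR/srHook` (the exact path depends on the distribution's SAPHanaSR package); follow the full procedure in the linked articles, which also covers stopping and restarting HANA:

```bash
# Append the hook registration to global.ini as the <sid>adm user (hn1adm here).
# Section and key names follow the SAPHanaSR documentation; the provider path is
# an assumption and should be checked against the installed package.
sudo -u hn1adm bash -c 'cat << EOF >> /hana/shared/HN1/global/hdb/custom/config/global.ini

[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /usr/share/SAPHanaSR/srHook
execution_order = 1

[trace]
ha_dr_saphanasr = info
EOF'
```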
