articles/azure-monitor/agents/azure-monitor-agent-migration.md
1 addition & 1 deletion
@@ -65,7 +65,7 @@ The **Azure Monitor Agent Migration Helper** workbook is a workbook-based Azure
## Understand your agents

- Use the [DCR generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator) to convert your legacy agent configuration into [data collection rules](../essentials/data-collection-rule-overview.md) automatically.<sup>1</sup>
+ Use the [DCR generator](./azure-monitor-agent-migration-data-collection-rule-generator.md) to convert your legacy agent configuration into [data collection rules](../essentials/data-collection-rule-overview.md) automatically.<sup>1</sup>
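For orientation, a data collection rule is a JSON document. The following is a minimal sketch of the shape such a rule takes; it's an illustrative payload, not the generator's literal output, and the workspace resource ID and counter specifier are placeholders.

```python
import json

# Illustrative shape of a data collection rule (DCR) payload; the
# workspace resource ID and counter specifier are placeholders.
dcr = {
    "location": "eastus",
    "properties": {
        "dataSources": {
            "performanceCounters": [
                {
                    "name": "perfCounterDataSource",
                    "streams": ["Microsoft-Perf"],
                    "samplingFrequencyInSeconds": 60,
                    "counterSpecifiers": [r"\Processor(_Total)\% Processor Time"],
                }
            ]
        },
        "destinations": {
            "logAnalytics": [
                {
                    "name": "centralWorkspace",
                    "workspaceResourceId": "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>",
                }
            ]
        },
        "dataFlows": [
            {"streams": ["Microsoft-Perf"], "destinations": ["centralWorkspace"]}
        ],
    },
}

print(json.dumps(dcr, indent=2))
```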
To help understand your agents, review the following questions:
articles/backup/backup-azure-about-mars.md
7 additions & 3 deletions
@@ -1,10 +1,10 @@
---
title: About the MARS Agent
description: Learn how the MARS Agent supports the backup scenarios
- ms.topic: conceptual
- ms.date: 08/18/2023
+ ms.topic: overview
+ ms.date: 08/13/2024
ms.service: azure-backup
- ms.custom: engagement-fy23
+ ms.custom: engagement-fy24
author: AbhishekMallick-MS
ms.author: v-abhmallick
---
@@ -50,6 +50,8 @@ The MARS agent supports the following recovery scenarios:
## Backup process
+ To back up files, folders, and the volume or system state from an on-premises computer to Azure using the Microsoft Azure Recovery Services (MARS) agent:

1. From the Azure portal, create a [Recovery Services vault](install-mars-agent.md#create-a-recovery-services-vault), and choose files, folders, and the system state from the **Backup goals**.
2. [Configure your Recovery Services vault to securely save the backup passphrase to Azure Key vault](save-backup-passphrase-securely-in-azure-key-vault.md).
3. [Download the Recovery Services vault credentials and agent installer](./install-mars-agent.md#download-the-mars-agent) to an on-premises machine.
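Step 1 can also be scripted rather than done in the portal. Here's a minimal sketch, assuming the `azure-mgmt-recoveryservices` and `azure-identity` Python packages; the subscription ID, resource group, and vault name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.recoveryservices import RecoveryServicesClient
from azure.mgmt.recoveryservices.models import Sku, Vault, VaultProperties

# Placeholders: substitute your own subscription, resource group, and region.
client = RecoveryServicesClient(DefaultAzureCredential(), "<subscription-id>")

# Create (or update) a Recovery Services vault and wait for completion.
vault = client.vaults.begin_create_or_update(
    "<resource-group>",
    "marsDemoVault",
    Vault(location="eastus", sku=Sku(name="Standard"), properties=VaultProperties()),
).result()

print(vault.id)
```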
@@ -63,6 +65,8 @@ The following diagram shows the backup flow:
### Additional information

+ To proceed with the backup operation, review the following additional details:

- The **Initial Backup** (first backup) runs according to your backup settings. The MARS agent uses VSS to take a point-in-time snapshot of the volumes selected for backup. The agent only uses the Windows System Writer operation to capture the snapshot. It doesn't use any application VSS writers, and doesn't capture app-consistent snapshots. After the VSS agent takes the snapshot, the MARS agent creates a virtual hard disk (VHD) in the cache folder you specified during the backup configuration. The agent also stores checksums for each data block.
- **Incremental backups** (subsequent backups) run according to the schedule you specify. During incremental backups, changed files are identified and a new VHD is created. The VHD is compressed and encrypted, and then it's sent to the vault. After the incremental backup finishes, the new VHD is merged with the VHD created after the initial replication. This merged VHD provides the latest state to be used for comparison for ongoing backup.
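The per-block checksums are what make incremental backups cheap: only blocks whose checksum changed since the last backup need to be carried into the new VHD. The following is a conceptual sketch of block-level change detection, not the MARS agent's actual implementation; the block size and hash algorithm are assumptions.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # assumed block size; illustrative only

def block_checksums(path):
    """Return a checksum for each fixed-size block of a file."""
    sums = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            sums.append(hashlib.sha256(chunk).hexdigest())
    return sums

def changed_blocks(previous, current):
    """Indices of blocks whose checksum differs since the last backup."""
    return [i for i, (old, new) in enumerate(zip(previous, current)) if old != new]
```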
articles/hdinsight/hadoop/apache-hadoop-deep-dive-advanced-analytics.md
12 additions & 12 deletions
@@ -4,7 +4,7 @@ description: Learn how advanced analytics uses algorithms to process big data in
ms.service: azure-hdinsight
ms.topic: how-to
ms.custom: hdinsightactive
- ms.date: 08/22/2023
+ ms.date: 08/13/2023
---

# Deep dive - advanced analytics
@@ -17,17 +17,17 @@ HDInsight provides the ability to obtain valuable insight from large amounts of
:::image type="content" source="./media/apache-hadoop-deep-dive-advanced-analytics/hdinsight-analytic-process.png" alt-text="Advanced analytics process flow." border="false":::

- After you've identified the business problem and have started collecting and processing your data, you need to create a model that represents the question you wish to predict. Your model will use one or more machine learning algorithms to make the type of prediction that best fits your business needs. The majority of your data should be used to train your model, with the rest used to test or evaluate it.
+ After you identify the business problem and start collecting and processing your data, you need to create a model that represents the question you wish to predict. Your model uses one or more machine learning algorithms to make the type of prediction that best fits your business needs. Most of your data should be used to train your model, with the rest used to test or evaluate it.

After you create, load, test, and evaluate your model, the next step is to deploy your model so that it begins supplying answers to your questions. The last step is to monitor your model's performance and tune it as necessary.
## Common types of algorithms
- Advanced analytics solutions provide a set of machine learning algorithms. Here is a summary of the categories of algorithms and associated common business use cases.
+ Advanced analytics solutions provide a set of machine learning algorithms. Here's a summary of the categories of algorithms and associated common business use cases.
- Along with selecting the best-fitting algorithm(s), you need to consider whether or not you need to provide data for training. Machine learning algorithms are categorized as follows:
+ Along with selecting one or more best-fitting algorithms, you need to consider whether or not you need to provide data for training. Machine learning algorithms are categorized as follows:

* Supervised - algorithm needs to be trained on a set of labeled data before it can provide results
* Semi-supervised - algorithm can be augmented by extra targets through interactive query by a trainer, which weren't available during the initial stage of training
@@ -58,7 +58,7 @@ There are three scalable machine learning libraries that bring algorithmic model
* [**MLlib**](https://spark.apache.org/docs/latest/ml-guide.html) - MLlib contains the original API built on top of Spark RDDs.
* [**SparkML**](https://spark.apache.org/docs/1.2.2/ml-guide.html) - SparkML is a newer package that provides a higher-level API built on top of Spark DataFrames for constructing ML pipelines.
- * [**MMLSpark**](https://github.com/Azure/mmlspark) - The Microsoft Machine Learning library for Apache Spark (MMLSpark) is designed to make data scientists more productive on Spark, to increase the rate of experimentation, and to leverage cutting-edge machine learning techniques, including deep learning, on very large datasets. The MMLSpark library simplifies common modeling tasks for building models in PySpark.
+ * [**MMLSpark**](https://github.com/Azure/mmlspark) - The Microsoft Machine Learning library for Apache Spark (MMLSpark) is designed to make data scientists more productive on Spark, to increase the rate of experimentation, and to leverage cutting-edge machine learning techniques, including deep learning, on large datasets. The MMLSpark library simplifies common modeling tasks for building models in PySpark.
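To make the DataFrame-based API concrete, here's a minimal SparkML pipeline sketch; the toy data and column names are invented for illustration.

```python
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparkml-sketch").getOrCreate()

# Toy training data: two numeric features and a binary label.
train = spark.createDataFrame(
    [(0.0, 1.1, 0), (2.0, 1.0, 1), (2.1, 3.3, 1), (0.2, 0.9, 0)],
    ["f1", "f2", "label"],
)

# Stages run in order: assemble the feature vector, then fit the classifier.
pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["f1", "f2"], outputCol="features"),
    LogisticRegression(maxIter=10),
])
model = pipeline.fit(train)
model.transform(train).select("label", "prediction").show()
```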
### Azure Machine Learning and Apache Hive
@@ -72,28 +72,28 @@ There are three scalable machine learning libraries that bring algorithmic model
Let's review an example of an advanced analytics machine learning pipeline using HDInsight.
- In this scenario you'll see how DNNs produced in a deep learning framework, Microsoft's Cognitive Toolkit (CNTK), can be operationalized for scoring large image collections stored in an Azure Blob Storage account using PySpark on an HDInsight Spark cluster. This approach is applied to a common DNN use case, aerial image classification, and can be used to identify recent patterns in urban development. You'll use a pre-trained image classification model. The model is pre-trained on the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) and has been applied to 10,000 withheld images.
+ In this scenario, you see how DNNs produced in a deep learning framework, Microsoft's Cognitive Toolkit (CNTK), can be operationalized for scoring large image collections stored in an Azure Blob Storage account using PySpark on an HDInsight Spark cluster. This approach is applied to a common DNN use case, aerial image classification, and can be used to identify recent patterns in urban development. You use a pretrained image classification model. The model is pretrained on the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) and has been applied to 10,000 withheld images.
There are three key tasks in this advanced analytics scenario:
1. Create an Azure HDInsight Hadoop cluster with an Apache Spark 2.1.0 distribution.
2. Run a custom script to install Microsoft Cognitive Toolkit on all nodes of an Azure HDInsight Spark cluster.
- 3. Upload a pre-built Jupyter Notebook to your HDInsight Spark cluster to apply a trained Microsoft Cognitive Toolkit deep learning model to files in an Azure Blob Storage Account using the Spark Python API (PySpark).
+ 3. Upload a prebuilt Jupyter Notebook to your HDInsight Spark cluster to apply a trained Microsoft Cognitive Toolkit deep learning model to files in an Azure Blob Storage Account using the Spark Python API (PySpark).

- This example uses the CIFAR-10 image set compiled and distributed by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset contains 60,000 32×32 color images belonging to 10 mutually exclusive classes:
+ This example uses the CIFAR-10 image set compiled and distributed by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset contains 60,000 32×32 color images belonging to 10 mutually exclusive classes:
:::image type="content" source="./media/apache-hadoop-deep-dive-advanced-analytics/machine-learning-images.png" alt-text="Machine Learning example images." border="false":::
For more information on the dataset, see Alex Krizhevsky's [Learning Multiple Layers of Features from Tiny Images](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf).
- The dataset was partitioned into a training set of 50,000 images and a test set of 10,000 images. The first set was used to train a twenty-layer-deep convolutional residual network (ResNet) model using Microsoft Cognitive Toolkit by following [this tutorial](https://github.com/Microsoft/CNTK/tree/master/Examples/Image/Classification/ResNet) from the Cognitive Toolkit GitHub repository. The remaining 10,000 images were used for testing the model's accuracy. This is where distributed computing comes into play: the task of pre-processing and scoring the images is highly parallelizable. With the saved trained model in hand, we used:
+ The dataset was partitioned into a training set of 50,000 images and a test set of 10,000 images. The first set was used to train a twenty-layer-deep convolutional residual network (ResNet) model using Microsoft Cognitive Toolkit by following [this tutorial](https://github.com/Microsoft/CNTK/tree/master/Examples/Image/Classification/ResNet) from the Cognitive Toolkit GitHub repository. The remaining 10,000 images were used for testing the model's accuracy. This is where distributed computing comes into play: the task of preprocessing and scoring the images is highly parallelizable. With the saved trained model in hand, we used:
* PySpark to distribute the images and trained model to the cluster's worker nodes.
- * Python to pre-process the images on each node of the HDInsight Spark cluster.
- * Cognitive Toolkit to load the model and score the pre-processed images on each node.
+ * Python to preprocess the images on each node of the HDInsight Spark cluster.
+ * Cognitive Toolkit to load the model and score the preprocessed images on each node.
* Jupyter Notebooks to run the PySpark script, aggregate the results, and use [Matplotlib](https://matplotlib.org/) to visualize the model performance.
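The distribution pattern behind these steps can be sketched in a few lines of PySpark. This illustrates the approach rather than the notebook's actual code; `preprocess` is a hypothetical stand-in, and the image paths and model file are placeholders.

```python
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cntk-scoring-sketch").getOrCreate()
sc = spark.sparkContext

# Placeholder paths; real code would list blobs in the storage account.
image_paths = sc.parallelize(["img-0001.png", "img-0002.png"])

def preprocess(path):
    # Hypothetical stand-in: real code would read the blob and normalize
    # the image to the model's 3x32x32 input.
    return np.zeros((3, 32, 32), dtype=np.float32)

def score_partition(paths):
    # Load the trained model once per partition, then score each image.
    from cntk.ops.functions import load_model
    model = load_model("ResNet_CIFAR10.model")  # placeholder model file
    for path in paths:
        yield path, int(np.argmax(model.eval(preprocess(path))))

predictions = image_paths.mapPartitions(score_partition).collect()
```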
- The entire preprocessing/scoring of the 10,000 images takes less than one minute on a cluster with 4 worker nodes. The model accurately predicts the labels of ~9,100 (91%) images. A confusion matrix illustrates the most common classification errors. For example, the matrix shows that mislabeling dogs as cats and vice versa occurs more frequently than for other label pairs.
+ The entire preprocessing/scoring of the 10,000 images takes less than one minute on a cluster with four worker nodes. The model accurately predicts the labels of ~9,100 (91%) images. A confusion matrix illustrates the most common classification errors. For example, the matrix shows that mislabeling dogs as cats and vice versa occurs more frequently than for other label pairs.
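Producing such a confusion matrix is a one-liner once the true and predicted labels are in hand. A minimal sketch using scikit-learn, with synthetic labels standing in for the real scoring output:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 10, size=10_000)   # synthetic 10-class labels
y_pred = y_true.copy()
wrong = rng.random(10_000) < 0.09           # re-draw ~9% of labels at random
y_pred[wrong] = rng.integers(0, 10, size=wrong.sum())

cm = confusion_matrix(y_true, y_pred)
print(cm)  # rows = true class, columns = predicted class
```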
articles/hdinsight/hadoop/apache-hadoop-use-sqoop-mac-linux.md
3 additions & 3 deletions
@@ -4,7 +4,7 @@ description: Learn how to use Apache Sqoop to import and export between Apache H
ms.service: azure-hdinsight
ms.topic: how-to
ms.custom: hdinsightactive, linux-related-content
- ms.date: 08/21/2023
+ ms.date: 08/13/2023
---

# Use Apache Sqoop to import and export data between Apache Hadoop on HDInsight and Azure SQL Database
@@ -138,9 +138,9 @@ From SQL to Azure storage.
* Both HDInsight and SQL Server must be on the same Azure Virtual Network.

- For an example, see the [Connect HDInsight to your on-premises network](./../connect-on-premises-network.md) document.
+ For an example, see the [How to connect HDInsight to your on-premises network](./../connect-on-premises-network.md) document.

- For more information on using HDInsight with an Azure Virtual Network, see the [Extend HDInsight with Azure Virtual Network](../hdinsight-plan-virtual-network-deployment.md) document. For more information on Azure Virtual Network, see the [Virtual Network Overview](../../virtual-network/virtual-networks-overview.md) document.
+ For more information on using HDInsight with an Azure Virtual Network, see the [how to extend HDInsight with Azure Virtual Network](../hdinsight-plan-virtual-network-deployment.md) document. For more information on Azure Virtual Network, see the [Virtual Network Overview](../../virtual-network/virtual-networks-overview.md) document.
* SQL Server must be configured to allow SQL authentication. For more information, see the [Choose an Authentication Mode](/sql/relational-databases/security/choose-an-authentication-mode) document.
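Before running Sqoop jobs, it can help to confirm that SQL authentication works by connecting with a SQL login. A minimal sketch, assuming the `pyodbc` package; the server, database, and login are placeholders.

```python
import pyodbc

# Placeholders: substitute your server, database, and SQL login.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>.database.windows.net,1433;"
    "DATABASE=<database>;UID=<sql-login>;PWD=<password>"
)
print(conn.cursor().execute("SELECT 1").fetchval())
```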
description: Gives an overview of the Azure HDInsight Accelerated Writes feature, which uses premium managed disks to improve performance of the Apache HBase Write Ahead Log.
ms.service: azure-hdinsight
ms.topic: how-to
- ms.date: 08/21/2023
+ ms.date: 08/13/2023
---

# Azure HDInsight Accelerated Writes for Apache HBase