---
title: Microsoft Cognitive Toolkit with Apache Spark - Azure HDInsight
description: Learn how a trained Microsoft Cognitive Toolkit deep learning model can be applied to a dataset using the Spark Python API in an Azure HDInsight Spark cluster.
author: hrasheed-msft
ms.author: hrasheed
ms.reviewer: jasonh
ms.service: hdinsight
ms.topic: conceptual
ms.custom: hdinsightactive
ms.date: 01/14/2020
---

# Use Microsoft Cognitive Toolkit deep learning model with Azure HDInsight Spark cluster

In this article, you complete the following steps:

1. Run a custom script to install [Microsoft Cognitive Toolkit](https://docs.microsoft.com/cognitive-toolkit/) on an Azure HDInsight Spark cluster.

2. Upload a [Jupyter Notebook](https://jupyter.org/) to the [Apache Spark](https://spark.apache.org/) cluster to see how to apply a trained Microsoft Cognitive Toolkit deep learning model to files in an Azure Blob Storage Account using the [Spark Python API (PySpark)](https://spark.apache.org/docs/latest/api/python/index.html).

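For reference, Spark on HDInsight reads files in the cluster's Azure Blob Storage account through `wasbs://` URIs. The following is only a minimal sketch of that access pattern from a PySpark session on the cluster; the folder path is a hypothetical placeholder, not the dataset used by the notebook.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical path in the cluster's default storage container; substitute your own.
# binaryFiles returns an RDD of (path, bytes) pairs, which suits image data.
images = spark.sparkContext.binaryFiles("wasbs:///example/sample-images/*")
print(images.count())
```
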
## Prerequisites

* An Apache Spark cluster on HDInsight. See [Create an Apache Spark cluster](./apache-spark-jupyter-spark-sql-use-portal.md).

* Familiarity with using Jupyter Notebooks with Spark on HDInsight. For more information, see [Load data and run queries with Apache Spark on HDInsight](./apache-spark-load-data-run-query.md).

## How does this solution flow?

This solution is divided between this article and a Jupyter notebook that you upload to the Spark cluster.

The following remaining steps are covered in the Jupyter notebook.

* Load sample images into a Spark Resilient Distributed Dataset or RDD.
* Load modules and define presets.
* Download the dataset locally on the Spark cluster.
* Convert the dataset into an RDD.
* Score the images using a trained Cognitive Toolkit model.
* Download the trained Cognitive Toolkit model to the Spark cluster.
* Define functions to be used by worker nodes.
* Score the images on worker nodes.
* Evaluate model accuracy.

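The notebook contains the complete, working implementation of these steps. As a rough sketch only, the core scoring pattern looks something like the following PySpark fragment: the trained model is loaded once per partition on each worker node and applied to the images in that partition. The paths, the `preprocess` helper, and the exact CNTK calls shown here are illustrative assumptions, not code taken from the notebook.

```python
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# Hypothetical locations -- the notebook downloads its own dataset and model.
IMAGE_PATH = "wasbs:///example/sample-images/*"
MODEL_PATH = "/tmp/trained_model.dnn"   # model previously downloaded to each worker node

def preprocess(raw_bytes):
    # Placeholder: a real implementation would decode and resize the image into
    # the exact input layout (for example, float32 CHW) the trained model expects.
    return np.zeros((3, 32, 32), dtype=np.float32)

def score_partition(partition):
    # Import and load the model on the worker, once per partition.
    import cntk
    model = cntk.load_model(MODEL_PATH)
    for path, raw_bytes in partition:
        features = preprocess(raw_bytes)
        prediction = model.eval({model.arguments[0]: [features]})
        yield path, int(np.argmax(prediction))

# Load the images as (path, bytes) pairs and score them on the worker nodes.
images_rdd = sc.binaryFiles(IMAGE_PATH)
scored = images_rdd.mapPartitions(score_partition)
print(scored.take(5))
```

Loading the model inside `mapPartitions` avoids serializing the model object from the driver and keeps one model instance per partition on each worker.
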
## Install Microsoft Cognitive Toolkit

You can install Microsoft Cognitive Toolkit on a Spark cluster using a script action. Script actions use custom scripts to install components on the cluster that aren't available by default. You can run the custom script from the Azure portal, by using the HDInsight .NET SDK, or by using Azure PowerShell. You can use the script to install the toolkit either as part of cluster creation, or after the cluster is up and running.


In this article, we use the portal to install the toolkit, after the cluster has been created. For other ways to run the custom script, see [Customize HDInsight clusters using Script Action](../hdinsight-hadoop-customize-cluster-linux.md).

### Using the Azure portal

For instructions on how to use the Azure portal to run script action, see [Customize HDInsight clusters using Script Action](../hdinsight-hadoop-customize-cluster-linux.md#use-a-script-action-during-cluster-creation). Use the following values for your script action to install Microsoft Cognitive Toolkit:

* Provide a value for the script action name.

* For **Bash script URI**, enter `https://raw.githubusercontent.com/Azure-Samples/hdinsight-pyspark-cntk-integration/master/cntk-install.sh`.

* Make sure you run the script only on the head and worker nodes, and clear all the other checkboxes.

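After the script action finishes, you can optionally confirm the installation before moving on. The following is a minimal check, assuming you run it from a PySpark Jupyter cell on the cluster (where the `spark` session is preconfigured); it isn't part of the article's walkthrough.

```python
# Optional check, not part of the walkthrough: confirm that CNTK imports on the
# driver and on the worker nodes after the script action has run.
def cntk_version(_):
    import cntk
    return cntk.__version__

import cntk
print("driver:", cntk.__version__)

# Run the same import in a few tasks so it executes on the worker nodes.
worker_versions = spark.sparkContext.parallelize(range(4), 4).map(cntk_version).distinct().collect()
print("workers:", worker_versions)
```
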
## Upload the Jupyter notebook to Azure HDInsight Spark cluster

To use the Microsoft Cognitive Toolkit with the Azure HDInsight Spark cluster, you must load the Jupyter notebook **CNTK_model_scoring_on_Spark_walkthrough.ipynb** to the Azure HDInsight Spark cluster. This notebook is available on GitHub at [https://github.com/Azure-Samples/hdinsight-pyspark-cntk-integration](https://github.com/Azure-Samples/hdinsight-pyspark-cntk-integration).

1. Download and unzip the repository [https://github.com/Azure-Samples/hdinsight-pyspark-cntk-integration](https://github.com/Azure-Samples/hdinsight-pyspark-cntk-integration).

1. From a web browser, navigate to `https://CLUSTERNAME.azurehdinsight.net/jupyter`, where `CLUSTERNAME` is the name of your cluster.

1. From the Jupyter home page, select **Upload** in the top-right corner, and then navigate to the unzipped download and select the file `CNTK_model_scoring_on_Spark_walkthrough.ipynb`.

    

1. Select **Upload** again.

1. After the notebook is uploaded, select the name of the notebook and then follow the instructions in the notebook itself on how to load the data set and complete the walkthrough.

## See also

* [Overview: Apache Spark on Azure HDInsight](apache-spark-overview.md)

### Scenarios

* [Apache Spark with BI: Perform interactive data analysis using Spark in HDInsight with BI tools](apache-spark-use-bi-tools.md)
* [Apache Spark with Machine Learning: Use Spark in HDInsight for analyzing building temperature using HVAC data](apache-spark-ipython-notebook-machine-learning.md)
* [Apache Spark with Machine Learning: Use Spark in HDInsight to predict food inspection results](apache-spark-machine-learning-mllib-ipython.md)
* [Website log analysis using Apache Spark in HDInsight](apache-spark-custom-library-website-log-analysis.md)
* [Application Insight telemetry data analysis using Apache Spark in HDInsight](apache-spark-analyze-application-insight-logs.md)

### Create and run applications

* [Create a standalone application using Scala](apache-spark-create-standalone-application.md)
* [Run jobs remotely on an Apache Spark cluster using Apache Livy](apache-spark-livy-rest-interface.md)

### Tools and extensions

* [Use HDInsight Tools Plugin for IntelliJ IDEA to create and submit Spark Scala applications](apache-spark-intellij-tool-plugin.md)
* [Use HDInsight Tools Plugin for IntelliJ IDEA to debug Apache Spark applications remotely](apache-spark-intellij-tool-plugin-debug-jobs-remotely.md)
* [Use Apache Zeppelin notebooks with an Apache Spark cluster on HDInsight](apache-spark-zeppelin-notebook.md)
* [Install Jupyter on your computer and connect to an HDInsight Spark cluster](apache-spark-jupyter-notebook-install-locally.md)

### Manage resources

* [Manage resources for the Apache Spark cluster in Azure HDInsight](apache-spark-resource-manager.md)
* [Track and debug jobs running on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)