Commit b937343

freshness31
1 parent a96038c commit b937343

File tree

1 file changed: +6 −6 lines changed


articles/hdinsight/spark/apache-spark-jupyter-notebook-kernels.md

Lines changed: 6 additions & 6 deletions
@@ -6,14 +6,14 @@ author: hrasheed-msft
 ms.author: hrasheed
 ms.reviewer: jasonh
 ms.service: hdinsight
-ms.custom: hdinsightactive,hdiseo17may2017
 ms.topic: conceptual
-ms.date: 05/27/2019
+ms.custom: hdinsightactive,hdiseo17may2017
+ms.date: 03/20/2020
 ---

 # Kernels for Jupyter notebook on Apache Spark clusters in Azure HDInsight

-HDInsight Spark clusters provide kernels that you can use with the Jupyter notebook on [Apache Spark](https://spark.apache.org/) for testing your applications. A kernel is a program that runs and interprets your code. The three kernels are:
+HDInsight Spark clusters provide kernels that you can use with the Jupyter notebook on [Apache Spark](./apache-spark-overview.md) for testing your applications. A kernel is a program that runs and interprets your code. The three kernels are:

 - **PySpark** - for applications written in Python2.
 - **PySpark3** - for applications written in Python3.
@@ -53,7 +53,7 @@ Here are a few benefits of using the new kernels with Jupyter notebook on Spark
 - **sc** - for Spark context
 - **sqlContext** - for Hive context

-So, you don't have to run statements like the following to set the contexts:
+So, you **don't** have to run statements like the following to set the contexts:

     sc = SparkContext('yarn-client')
     sqlContext = HiveContext(sc)
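The context lines above are the point of this hunk: the new kernels pre-create `sc` and `sqlContext`, so a notebook cell can use them immediately. A minimal sketch of such a cell under the PySpark kernel — this assumes a live HDInsight Spark cluster, and `hivesampletable` (the sample table HDInsight clusters ship with) is used purely for illustration:

```python
# In a PySpark-kernel Jupyter cell on an HDInsight Spark cluster,
# sc and sqlContext are predefined by the kernel; there is no need for
# SparkContext('yarn-client') / HiveContext(sc) boilerplate.

# Spark context, ready to use:
rdd = sc.parallelize(range(100))
total = rdd.sum()  # distributed sum of 0..99

# Hive context, ready to use (hivesampletable is assumed to be
# the built-in sample table on the cluster):
df = sqlContext.sql("SELECT clientid, querytime FROM hivesampletable LIMIT 10")
df.show()
```

This sketch only runs inside a kernel session on a cluster, where `sc` and `sqlContext` already exist; run standalone it would fail with a `NameError`, which is exactly the setup work the kernels remove.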
@@ -119,7 +119,7 @@ The way notebooks are saved to the storage account is compatible with [Apache Ha

     hdfs dfs -ls /HdiNotebooks # List everything at the root directory – everything in this directory is visible to Jupyter from the home page
     hdfs dfs –copyToLocal /HdiNotebooks # Download the contents of the HdiNotebooks folder
-    hdfs dfs –copyFromLocal example.ipynb /HdiNotebooks # Upload a notebook example.ipynb to the root folder so its visible from Jupyter
+    hdfs dfs –copyFromLocal example.ipynb /HdiNotebooks # Upload a notebook example.ipynb to the root folder so it's visible from Jupyter

 Irrespective of whether the cluster uses Azure Storage or Azure Data Lake Storage as the default storage account, the notebooks are also saved on the cluster headnode at `/var/lib/jupyter`.
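Because the notebooks are mirrored at `/var/lib/jupyter` on the headnode, they can also be copied off over SSH rather than through HDFS. A sketch assuming the default HDInsight SSH endpoint naming, with hypothetical cluster name `mycluster` and SSH user `sshuser`:

```shell
# Hypothetical names: replace mycluster and sshuser with your own values.
# Copies a notebook from the headnode mirror to the current local directory.
scp sshuser@mycluster-ssh.azurehdinsight.net:/var/lib/jupyter/example.ipynb .
```

This path works even when the cluster's default storage account is not convenient to reach, since the headnode copy is plain files on disk.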

@@ -131,7 +131,7 @@ Jupyter notebooks on Spark HDInsight clusters are supported only on Google Chrom

 The new kernels are in evolving stage and will mature over time. This could also mean that APIs could change as these kernels mature. We would appreciate any feedback that you have while using these new kernels. This is useful in shaping the final release of these kernels. You can leave your comments/feedback under the **Feedback** section at the bottom of this article.

-## <a name="seealso"></a>See also
+## See also

 - [Overview: Apache Spark on Azure HDInsight](apache-spark-overview.md)
