Commit a2929de

Merge pull request #95274 from dagiro/freshness45

2 parents 80a1662 + c04f78d

File tree

1 file changed: +9 −5 lines changed

articles/hdinsight/spark/apache-spark-jupyter-notebook-install-locally.md

Lines changed: 9 additions & 5 deletions
```diff
@@ -7,7 +7,7 @@ ms.reviewer: jasonh
 ms.service: hdinsight
 ms.custom: hdinsightactive
 ms.topic: conceptual
-ms.date: 06/06/2019
+ms.date: 11/07/2019
 ---

 # Install Jupyter notebook on your computer and connect to Apache Spark on HDInsight
```
```diff
@@ -25,9 +25,9 @@ For more information about the custom kernels and the Spark magic available for

 ## Prerequisites

-The prerequisites listed here are not for installing Jupyter. These are for connecting the Jupyter notebook to an HDInsight cluster once the notebook is installed.
+* An Apache Spark cluster on HDInsight. For instructions, see [Create Apache Spark clusters in Azure HDInsight](apache-spark-jupyter-spark-sql.md). This is a prerequisite for connecting the Jupyter notebook to an HDInsight cluster once the notebook is installed.

-* An Apache Spark cluster on HDInsight. For instructions, see [Create Apache Spark clusters in Azure HDInsight](apache-spark-jupyter-spark-sql.md).
+* Familiarity with using Jupyter Notebooks with Spark on HDInsight.

 ## Install Jupyter notebook on your computer

```
```diff
@@ -41,7 +41,7 @@ Download the [Anaconda installer](https://www.anaconda.com/download/) for your p

 |Cluster version | Install command |
 |---|---|
-|v3.6 and v3.5 |`pip install sparkmagic==0.12.7`|
+|v3.6 and v3.5 |`pip install sparkmagic==0.13.1`|
 |v3.4|`pip install sparkmagic==0.2.3`|

 1. Ensure `ipywidgets` is properly installed by running the following command:
```
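The table in this hunk maps an HDInsight cluster version to a pinned `sparkmagic` release. A minimal Python sketch of that mapping (version numbers are taken from the updated table; the dictionary and helper names are hypothetical, not part of the article):

```python
# Mapping from the updated table in this commit; v3.6/v3.5 move to 0.13.1.
SPARKMAGIC_PINS = {
    "3.6": "0.13.1",
    "3.5": "0.13.1",
    "3.4": "0.2.3",
}

def install_command(cluster_version: str) -> str:
    """Return the pip command pinned for a given HDInsight cluster version."""
    pin = SPARKMAGIC_PINS[cluster_version]
    return f"pip install sparkmagic=={pin}"

print(install_command("3.6"))  # pip install sparkmagic==0.13.1
```

Pinning an exact version matters here because the sparkmagic wire protocol must match what the cluster's Livy endpoint expects.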
```diff
@@ -111,6 +111,10 @@ In this section, you configure the Spark magic that you installed earlier to con
     "url": "https://{CLUSTERDNSNAME}.azurehdinsight.net/livy"
   },
+  "custom_headers" : {
+    "X-Requested-By": "livy"
+  },
+

   "heartbeat_refresh_seconds": 5,
   "livy_server_heartbeat_timeout_seconds": 60,
   "heartbeat_retry_seconds": 1
```
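This hunk adds a `custom_headers` block to the sparkmagic config so that requests to the cluster carry the `X-Requested-By` header, which Livy expects when its CSRF protection is enabled. A sketch of assembling that fragment in Python (only the keys visible in the diff are from the source; treat the overall config shape as an assumption and merge it into your real `~/.sparkmagic/config.json`):

```python
import json

# Sketch only: the config fragment this commit documents.
# {CLUSTERDNSNAME} is the article's placeholder; replace it with a real cluster name.
config_fragment = {
    "url": "https://{CLUSTERDNSNAME}.azurehdinsight.net/livy",
    # Added by this commit: Livy rejects modifying requests that lack an
    # X-Requested-By header when CSRF protection is enabled.
    "custom_headers": {"X-Requested-By": "livy"},
    "heartbeat_refresh_seconds": 5,
    "livy_server_heartbeat_timeout_seconds": 60,
    "heartbeat_retry_seconds": 1,
}

print(json.dumps(config_fragment, indent=2))
```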
```diff
@@ -165,7 +169,7 @@ There can be a number of reasons why you might want to install Jupyter on your c
 * With the notebooks available locally, you can connect to different Spark clusters based on your application requirement.
 * You can use GitHub to implement a source control system and have version control for the notebooks. You can also have a collaborative environment where multiple users can work with the same notebook.
 * You can work with notebooks locally without even having a cluster up. You only need a cluster to test your notebooks against, not to manually manage your notebooks or a development environment.
-* It may be easier to configure your own local development environment than it is to configure the Jupyter installation on the cluster. You can take advantage of all the software you have installed locally without configuring one or more remote clusters.
+* It may be easier to configure your own local development environment than it's to configure the Jupyter installation on the cluster. You can take advantage of all the software you've installed locally without configuring one or more remote clusters.

 > [!WARNING]
 > With Jupyter installed on your local computer, multiple users can run the same notebook on the same Spark cluster at the same time. In such a situation, multiple Livy sessions are created. If you run into an issue and want to debug that, it will be a complex task to track which Livy session belongs to which user.
```
