
Commit b6103bc

Update apache-spark-connect-to-sql-database.md

1 parent 5b1bac3

File tree

1 file changed: +7 −1 lines changed

articles/hdinsight/spark/apache-spark-connect-to-sql-database.md

@@ -30,18 +30,24 @@ Start by creating a Jupyter Notebook associated with the Spark cluster. You use
 1. From the [Azure portal](https://portal.azure.com/), open your cluster.

 1. Select **Jupyter Notebook** underneath **Cluster dashboards** on the right side. If you don't see **Cluster dashboards**, select **Overview** from the left menu. If prompted, enter the admin credentials for the cluster.

+    :::image type="content" source="./media/apache-spark-connect-to-sql-database/new-hdinsight-spark-cluster-dashboard-jupyter-notebook.png" alt-text="Jupyter Notebook on Apache Spark" border="true":::
+
    > [!NOTE]
    > You can also access the Jupyter Notebook on the Spark cluster by opening the following URL in your browser. Replace **CLUSTERNAME** with the name of your cluster:
    >
    > `https://CLUSTERNAME.azurehdinsight.net/jupyter`

 1. In the Jupyter Notebook, from the top-right corner, click **New**, and then click **Spark** to create a Scala notebook. Jupyter Notebooks on an HDInsight Spark cluster also provide the **PySpark** kernel for Python2 applications, and the **PySpark3** kernel for Python3 applications. For this article, we create a Scala notebook.

+    :::image type="content" source="./media/apache-spark-connect-to-sql-database/new-kernel-jupyter-notebook-on-spark.png" alt-text="Kernels for Jupyter Notebook on Spark" border="true":::
+
+    For more information about the kernels, see [Use Jupyter Notebook kernels with Apache Spark clusters in HDInsight](apache-spark-jupyter-notebook-kernels.md).
+
    > [!NOTE]
    > In this article, we use a Spark (Scala) kernel because streaming data from Spark into SQL Database is currently supported only in Scala and Java. Even though reading from and writing into SQL can be done using Python, for consistency in this article, we use Scala for all three operations.

 1. A new notebook opens with a default name, **Untitled**. Click the notebook name and enter a name of your choice.

-    ![](media/apache-spark-connect-to-sql-database/new-hdinsight-spark-jupyter-notebook-name.png)
+    :::image type="content" source="./media/apache-spark-connect-to-sql-database/new-hdinsight-spark-jupyter-notebook-name.png" alt-text="Provide a name for the notebook" border="true":::

 You can now start creating your application.
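The article this commit updates goes on to read from and write to Azure SQL Database from the Scala notebook. As a hedged sketch (not part of this diff), assuming the notebook's built-in `spark` session and placeholder server, database, table, and credential values that you would substitute, a JDBC read via Spark's standard `jdbc` data source might look like this:

```scala
// Sketch only: assumes the Jupyter notebook's built-in SparkSession (`spark`)
// and placeholder <server>/<database>/<table>/<username>/<password> values.
val jdbcUrl = "jdbc:sqlserver://<server>.database.windows.net:1433;database=<database>"

// Read a SQL Database table into a DataFrame using the JDBC data source.
val df = spark.read
  .format("jdbc")
  .option("url", jdbcUrl)
  .option("dbtable", "dbo.<table>")
  .option("user", "<username>")
  .option("password", "<password>")
  .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
  .load()

df.show(5)   // preview the first few rows
```

Because the placeholders must be filled in for your own cluster and database, this fragment is a template rather than runnable code; the equivalent write path would use `df.write.format("jdbc")` with the same options.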
