
Commit 362be05

Resolving images

1 parent 701a437 commit 362be05

File tree

5 files changed (+0, -4 lines)


articles/hdinsight-aks/spark/use-machine-learning-notebook-on-spark.md

Lines changed: 0 additions & 2 deletions
```diff
@@ -22,8 +22,6 @@ The following tutorial notebook shows an example of training machine learning mo
 1. Find your storage and container name in the portal JSON view

    :::image type="content" source="./media/use-machine-learning-notebook-on-spark/json-view.png" alt-text="Screenshot showing JSON view." lightbox="./media/use-machine-learning-notebook-on-spark/json-view.png":::
-
-   :::image type="content" source="./media/use-machine-learning-notebook-on-spark/resource-json.png" alt-text="Screenshot showing resource JSON view." lightbox="./media/use-machine-learning-notebook-on-spark/resource-json.png":::

 1. Navigate into your primary HDI storage>container>base folder> upload the [CSV](https://github.com/Azure-Samples/hdinsight-aks/blob/main/spark/iris_csv.csv)
```

articles/hdinsight/hdinsight-log-management.md

Lines changed: 0 additions & 2 deletions
```diff
@@ -97,8 +97,6 @@ log4j.logger.alerts=DEBUG,alerts

 The next step is reviewing the job execution log files for the various services. Services could include Apache HBase, Apache Spark, and many others. A Hadoop cluster produces a large number of verbose logs, so determining which logs are useful (and which aren't) can be time-consuming. Understanding the logging system is important for targeted management of log files. The following image is an example log file.

-:::image type="content" source="./media/hdinsight-log-management/hdi-log-file-example.png" alt-text="HDInsight example log file sample output.":::
-
 ### Access the Hadoop log files

 HDInsight stores its log files both in the cluster file system and in Azure Storage. You can examine log files in the cluster by opening an [SSH](hdinsight-hadoop-linux-use-ssh-unix.md) connection to the cluster and browsing the file system, or by using the Hadoop YARN Status portal on the remote head node server. You can examine the log files in Azure Storage using any of the tools that can access and download data from Azure Storage. Examples are [AzCopy](../storage/common/storage-use-azcopy-v10.md), [CloudXplorer](https://clumsyleaf.com/products/cloudxplorer), and the Visual Studio Server Explorer. You can also use PowerShell and the Azure Storage Client libraries, or the Azure .NET SDKs, to access data in Azure blob storage.
```
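The AzCopy approach mentioned in that paragraph can be sketched as follows. The storage account, container, log path, and SAS token below are hypothetical placeholders, not values from this commit; the script only assembles and prints the download command rather than running it.

```shell
#!/bin/sh
# Hedged sketch: downloading HDInsight log files from Azure Storage with AzCopy v10.
# ACCOUNT, CONTAINER, LOG_PATH, and SAS_TOKEN are placeholder values for illustration.
ACCOUNT="mystorageaccount"
CONTAINER="mycluster-container"
LOG_PATH="app-logs"
SAS_TOKEN="<sas-token>"

# "azcopy copy" with --recursive downloads every blob under the prefix to ./logs.
CMD="azcopy copy \"https://${ACCOUNT}.blob.core.windows.net/${CONTAINER}/${LOG_PATH}?${SAS_TOKEN}\" ./logs --recursive"
echo "$CMD"
```

In practice you would substitute a real account, container, and SAS token (or sign in with `azcopy login`) and execute the printed command directly.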
-2.15 KB
Binary file not shown.

0 commit comments
