articles/hdinsight/hdinsight-hadoop-access-yarn-app-logs-linux.md (95 additions & 8 deletions)
@@ -5,9 +5,9 @@ author: hrasheed-msft
ms.author: hrasheed
ms.reviewer: jasonh
ms.service: hdinsight
-ms.custom: hdinsightactive
ms.topic: conceptual
-ms.date: 11/15/2019
+ms.custom: hdinsightactive
+ms.date: 01/23/2020
---

# Access Apache Hadoop YARN application logs on Linux-based HDInsight
@@ -22,7 +22,7 @@ Each application may consist of multiple *application attempts*. If an applicati
To scale your cluster to support greater processing throughput, you can use [Autoscale](hdinsight-autoscale-clusters.md) or [Scale your clusters manually using a few different languages](hdinsight-scaling-best-practices.md#utilities-to-scale-clusters).
-## <a name="YARNTimelineServer"></a>YARN Timeline Server
+## YARN Timeline Server

The [Apache Hadoop YARN Timeline Server](https://hadoop.apache.org/docs/r2.7.3/hadoop-yarn/hadoop-yarn-site/TimelineServer.html) provides generic information on completed applications.
@@ -33,36 +33,123 @@ YARN Timeline Server includes the following type of data:
* Information on attempts made to complete the application
* The containers used by any given application attempt
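As an illustration, this Timeline Server data can be queried over REST. The following is a minimal sketch, assuming the generic ApplicationHistory endpoint (`/ws/v1/applicationhistory`) is reachable through the cluster gateway and that you authenticate with the cluster admin account; both are assumptions rather than details from this article.

```bash
# List completed applications recorded by the YARN Timeline Server
# (prompts for the cluster admin password; CLUSTERNAME is your cluster name)
curl -u admin "https://CLUSTERNAME.azurehdinsight.net/ws/v1/applicationhistory/apps"
```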
-## <a name="YARNAppsAndLogs"></a>YARN applications and logs
+## YARN applications and logs

YARN supports multiple programming models ([Apache Hadoop MapReduce](https://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html) being one of them) by decoupling resource management from application scheduling/monitoring. YARN uses a global *ResourceManager* (RM), per-worker-node *NodeManagers* (NMs), and per-application *ApplicationMasters* (AMs). The per-application AM negotiates resources (CPU, memory, disk, network) for running your application with the RM. The RM works with NMs to grant these resources, which are granted as *containers*. The AM is responsible for tracking the progress of the containers assigned to it by the RM. An application may require many containers depending on the nature of the application.
Each application may consist of multiple *application attempts*. If an application fails, it may be retried as a new attempt. Each attempt runs in a container. In a sense, a container provides the context for the basic unit of work performed by a YARN application. All work that is done within the context of a container is performed on the single worker node on which the container was allocated. See [Apache Hadoop YARN Concepts](https://hadoop.apache.org/docs/r2.7.4/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html) for further reference.
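To make these concepts concrete, the YARN CLI (introduced later in this article) can list applications and their attempts, so you can find the IDs used in later commands. A short sketch, assuming you're connected to a cluster head node:

```bash
# List applications known to the ResourceManager, including finished ones
yarn application -list -appStates ALL

# List the attempts for one application, including each attempt's AM container ID
yarn applicationattempt -list <applicationId>
```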
Application logs (and the associated container logs) are critical in debugging problematic Hadoop applications. YARN provides a convenient framework for collecting, aggregating, and storing application logs with the [Log Aggregation](https://hortonworks.com/blog/simplifying-user-logs-management-and-access-in-yarn/) feature. The Log Aggregation feature makes accessing application logs more deterministic. It aggregates logs across all containers on a worker node and stores them as one aggregated log file per worker node. The log is stored on the default file system after an application finishes. Your application may use hundreds or thousands of containers, but logs for all containers run on a single worker node are always aggregated to a single file, so there's only one log per worker node used by your application. Log Aggregation is enabled by default on HDInsight clusters version 3.0 and above. Aggregated logs are located in the default storage for the cluster. The following path is the HDFS path to the logs:
-/app-logs/<user>/logs/<applicationId>
+```
+/app-logs/<user>/logs/<applicationId>
+```

In the path, `user` is the name of the user who started the application. The `applicationId` is the unique identifier assigned to an application by the YARN RM.
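For example, you can list the aggregated log files for an application straight from the cluster's default storage; there should be one file per worker node that ran containers for that application. A sketch using the path above:

```bash
# One aggregated log file per worker node used by the application
hdfs dfs -ls /app-logs/<user>/logs/<applicationId>
```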
The aggregated logs aren't directly readable, as they're written in a [TFile](https://issues.apache.org/jira/secure/attachment/12396286/TFile%20Specification%2020081217.pdf), [binary format](https://issues.apache.org/jira/browse/HADOOP-3315) indexed by container. Use the YARN ResourceManager logs or CLI tools to view these logs as plain text for applications or containers of interest.
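For example, a single `yarn logs` call renders an application's aggregated TFile logs as plain text. A sketch, using the placeholders described above (the output file name is arbitrary):

```bash
# Dump the aggregated logs for one application as readable text
yarn logs -applicationId <applicationId> -appOwner <user-who-started-the-application> > applicationlogs.txt
```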
+## YARN logs in an ESP cluster
+
+Two configurations must be added to the custom `mapred-site` in Ambari:
+
+1. From a web browser, navigate to `https://CLUSTERNAME.azurehdinsight.net`, where `CLUSTERNAME` is the name of your cluster.
+
+1. From the Ambari UI, navigate to **MapReduce2** > **Configs** > **Advanced** > **Custom mapred-site**.
+
+1. Save changes and restart all affected services.

## YARN CLI tools

-To use the YARN CLI tools, you must first connect to the HDInsight cluster using SSH. For information, see [Use SSH with HDInsight](hdinsight-hadoop-linux-use-ssh-unix.md).
+1. Use the [ssh command](./hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your cluster. Edit the command below by replacing CLUSTERNAME with the name of your cluster, and then enter the command:
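A minimal sketch of that command, assuming the default `sshuser` SSH account (replace it if your cluster uses a different SSH user name):

```bash
# Replace CLUSTERNAME with the name of your cluster
ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net
```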
Specify the `<applicationId>`, `<user-who-started-the-application>`, `<containerId>`, and `<worker-node-address>` information when running these commands.
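A hedged sketch of what a per-container invocation looks like with those placeholders, using standard `yarn logs` options (verify the options against your cluster's YARN version):

```bash
# Logs for one specific container, identified by container ID and the worker node that ran it
yarn logs -applicationId <applicationId> -appOwner <user-who-started-the-application> \
    -containerId <containerId> -nodeAddress <worker-node-address>
```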

+### Other sample commands

+1. Download YARN container logs for all application masters with the command below. This creates a log file named `amlogs.txt` in text format.

+```bash
+yarn logs -applicationId <application_id> -am ALL > amlogs.txt
+```

+1. Download YARN container logs for only the latest application master with the following command:
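A sketch of that command, assuming `-am -1` selects the most recent application master attempt (the `latestamlogs.txt` file name is illustrative):

```bash
yarn logs -applicationId <application_id> -am -1 > latestamlogs.txt
```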
-Specify the <applicationId>, <user-who-started-the-application>, <containerId>, and <worker-node-address> information when running these commands.