`articles/hdinsight/hadoop/hdinsight-use-mapreduce.md` (4 additions, 4 deletions)
@@ -4,7 +4,7 @@ description: Learn how to run Apache MapReduce jobs on Apache Hadoop in HDInsigh
 ms.service: hdinsight
 ms.topic: how-to
 ms.custom: hdinsightactive
-ms.date: 12/21/2022
+ms.date: 01/04/2024
 ---
 
 # Use MapReduce in Apache Hadoop on HDInsight
@@ -13,7 +13,7 @@ Learn how to run MapReduce jobs on HDInsight clusters.
 
 ## Example data
 
-HDInsight provides various example data sets, which are stored in the `/example/data` and `/HdiSamples` directory. These directories are in the default storage for your cluster. In this document, we use the `/example/data/gutenberg/davinci.txt` file. This file contains the notebooks of Leonardo da Vinci.
+HDInsight provides various example data sets, which are stored in the `/example/data` and `/HdiSamples` directory. These directories are in the default storage for your cluster. In this document, we use the `/example/data/gutenberg/davinci.txt` file. This file contains the notebooks of `Leonardo da Vinci`.
 
 ## Example MapReduce
 
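The `/example/data` and `/HdiSamples` paths mentioned in the changed line above live in the cluster's default storage. As a minimal sketch of how a reader could confirm the sample file exists, assuming the standard HDInsight sample layout and placeholder `sshuser`/`CLUSTERNAME` values:

```bash
# Connect to the cluster head node first (placeholder names):
#   ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net

# List the sample data sets; both paths resolve against the cluster's default storage
hdfs dfs -ls /example/data
hdfs dfs -ls /HdiSamples

# Preview the start of the da Vinci notebooks file used throughout the article
hdfs dfs -cat /example/data/gutenberg/davinci.txt | head -n 20
```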
@@ -101,8 +101,8 @@ HDInsight can run HiveQL jobs by using various methods. Use the following table
 
 |**Use this**... |**...to do this**| ...from this **client operating system**|
 |:--- |:--- |:--- |:--- |
-|[SSH](apache-hadoop-use-mapreduce-ssh.md)|Use the Hadoop command through **SSH**|Linux, Unix, Mac OS X, or Windows |
-|[Curl](apache-hadoop-use-mapreduce-curl.md)|Submit the job remotely by using **REST**|Linux, Unix, Mac OS X, or Windows |
+|[SSH](apache-hadoop-use-mapreduce-ssh.md)|Use the Hadoop command through **SSH**|Linux, Unix, `MacOS X`, or Windows |
+|[Curl](apache-hadoop-use-mapreduce-curl.md)|Submit the job remotely by using **REST**|Linux, Unix, `MacOS X`, or Windows |
 |[Windows PowerShell](apache-hadoop-use-mapreduce-powershell.md)|Submit the job remotely by using **Windows PowerShell**|Windows |
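The rows above name the submission methods without showing the commands. A hedged sketch of the two remote options in the changed rows, assuming the stock `hadoop-mapreduce-examples.jar` that ships with HDInsight and treating `CLUSTERNAME`, `admin`, and the output paths as placeholders:

```bash
# Over SSH: run the sample wordcount class from the bundled examples jar
yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar \
    wordcount /example/data/gutenberg/davinci.txt /example/data/davinciwordcount

# With curl: submit the same job remotely through the WebHCat (Templeton) REST API;
# curl prompts for the cluster login (admin) password
curl -u admin -d user.name=admin \
     -d jar=/example/jars/hadoop-mapreduce-examples.jar \
     -d class=wordcount \
     -d arg=/example/data/gutenberg/davinci.txt \
     -d arg=/example/data/CurlOut \
     -d statusdir=/example/curl \
     https://CLUSTERNAME.azurehdinsight.net/templeton/v1/mapreduce/jar
```

Either route writes the word counts to the output path given as the second argument.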