
Commit fa28acd

Merge pull request #57226 from kkampf/patch-1
Removed mentions of Hadoop context
2 parents 3d244f4 + e56d9dd commit fa28acd

File tree

1 file changed: +3 −6 lines changed


articles/hdinsight/r-server/r-server-hdinsight-manage.md

Lines changed: 3 additions & 6 deletions
@@ -8,7 +8,7 @@ ms.author: hrasheed
 ms.reviewer: jasonh
 ms.custom: hdinsightactive
 ms.topic: conceptual
-ms.date: 06/27/2018
+ms.date: 11/06/2018
 ---
 # Manage ML Services cluster on Azure HDInsight

@@ -74,7 +74,7 @@ Note also that the newly added users do not have root privileges in Linux system

 ## Connect remotely to Microsoft ML Services

-You can set up access to the HDInsight Hadoop Spark compute context from a remote instance of ML Client running on your desktop. To do so, you must specify the options (hdfsShareDir, shareDir, sshUsername, sshHostname, sshSwitches, and sshProfileScript) when defining the RxSpark compute context on your desktop. For example:
+You can set up access to the HDInsight Spark compute context from a remote instance of ML Client running on your desktop. To do so, you must specify the options (hdfsShareDir, shareDir, sshUsername, sshHostname, sshSwitches, and sshProfileScript) when defining the RxSpark compute context on your desktop. For example:

 myNameNode <- "default"
 myPort <- 0
@@ -220,16 +220,13 @@ A compute context allows you to control whether computation is performed locally
 summary(modelSpark)


-> [!NOTE]
-> You can also use MapReduce to distribute computation across cluster nodes. For more information on compute context, see [Compute context options for ML Services cluster on HDInsight](r-server-compute-contexts.md).
-
 ## Distribute R code to multiple nodes

 With ML Services on HDInsight, you can take existing R code and run it across multiple nodes in the cluster by using `rxExec`. This function is useful when doing a parameter sweep or simulations. The following code is an example of how to use `rxExec`:

 rxExec( function() {Sys.info()["nodename"]}, timesToRun = 4 )

-If you are still using the Spark or MapReduce context, this command returns the nodename value for the worker nodes that the code `(Sys.info()["nodename"])` is run on. For example, on a four-node cluster, you expect to receive output similar to the following snippet:
+If you are still using the Spark context, this command returns the nodename value for the worker nodes that the code `(Sys.info()["nodename"])` is run on. For example, on a four-node cluster, you expect to receive output similar to the following snippet:

 $rxElem1
 nodename
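For context: the paragraph changed in the second hunk names the options required to define a remote RxSpark compute context, but the diff only shows the first two lines of the article's example (`myNameNode`, `myPort`). A minimal sketch of how those options fit together is below. All values are placeholder assumptions for illustration, not taken from this commit; only the parameter names (hdfsShareDir, shareDir, sshUsername, sshHostname, sshSwitches, sshProfileScript) come from the changed text.

```r
# Sketch only: defining a remote RxSpark compute context from a desktop
# ML Client session. All values below are hypothetical placeholders.
library(RevoScaleR)

myNameNode <- "default"
myPort <- 0

mySparkCluster <- RxSpark(
  hdfsShareDir     = "/user/RevoShare/remoteuser",   # shared directory in HDFS
  shareDir         = "/var/RevoShare/remoteuser",    # local share on the edge node
  sshUsername      = "sshuser",                      # SSH login for the cluster
  sshHostname      = "mycluster-ssh.azurehdinsight.net",
  sshSwitches      = "",                             # extra ssh switches, if any
  sshProfileScript = "/etc/profile",                 # profile sourced on the remote host
  nameNode         = myNameNode,
  port             = myPort
)

# Direct subsequent rx* computations at the cluster
rxSetComputeContext(mySparkCluster)
```

With the context set, a call such as the `rxExec` example in the third hunk runs on the cluster's worker nodes rather than locally.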
