
Commit 8615b0e

Removed mentions of Hadoop context
1 parent 0045b69 commit 8615b0e

File tree

1 file changed: +2 -5 lines changed


articles/hdinsight/r-server/r-server-hdinsight-manage.md

Lines changed: 2 additions & 5 deletions
@@ -74,7 +74,7 @@ Note also that the newly added users do not have root privileges in Linux system
 
 ## Connect remotely to Microsoft ML Services
 
-You can set up access to the HDInsight Hadoop Spark compute context from a remote instance of ML Client running on your desktop. To do so, you must specify the options (hdfsShareDir, shareDir, sshUsername, sshHostname, sshSwitches, and sshProfileScript) when defining the RxSpark compute context on your desktop. For example:
+You can set up access to the HDInsight Spark compute context from a remote instance of ML Client running on your desktop. To do so, you must specify the options (hdfsShareDir, shareDir, sshUsername, sshHostname, sshSwitches, and sshProfileScript) when defining the RxSpark compute context on your desktop. For example:
 
 myNameNode <- "default"
 myPort <- 0
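
For reference, the article's snippet continues past the two lines shown in the diff. A minimal sketch of what a complete remote RxSpark compute context definition can look like with RevoScaleR; the hostname, username, SSH switches, and share paths below are illustrative placeholders, not values from this commit:

# Sketch: define an RxSpark compute context from a remote ML Client desktop.
# <clustername> and <sshuser> are placeholders for your cluster's values.
myNameNode <- "default"
myPort <- 0
mySshHostname <- "<clustername>-ed-ssh.azurehdinsight.net"   # edge node address
mySshUsername <- "<sshuser>"
mySshSwitches <- "-i ~/.ssh/id_rsa"                          # extra ssh flags, e.g., a private key
myShareDir <- paste("/var/RevoShare", mySshUsername, sep = "/")
myHdfsShareDir <- paste("/user/RevoShare", mySshUsername, sep = "/")

mySparkCluster <- RxSpark(
    hdfsShareDir     = myHdfsShareDir,   # HDFS directory for intermediate files
    shareDir         = myShareDir,       # local share directory on the edge node
    sshUsername      = mySshUsername,
    sshHostname      = mySshHostname,
    sshSwitches      = mySshSwitches,
    sshProfileScript = "/etc/profile",
    nameNode         = myNameNode,
    port             = myPort,
    consoleOutput    = TRUE)

rxSetComputeContext(mySparkCluster)      # run subsequent rx* functions on the cluster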
@@ -220,16 +220,13 @@ A compute context allows you to control whether computation is performed locally
 summary(modelSpark)
 
 
-> [!NOTE]
-> You can also use MapReduce to distribute computation across cluster nodes. For more information on compute context, see [Compute context options for ML Services cluster on HDInsight](r-server-compute-contexts.md).
-
 ## Distribute R code to multiple nodes
 
 With ML Services on HDInsight, you can take existing R code and run it across multiple nodes in the cluster by using `rxExec`. This function is useful when doing a parameter sweep or simulations. The following code is an example of how to use `rxExec`:
 
 rxExec( function() {Sys.info()["nodename"]}, timesToRun = 4 )
 
-If you are still using the Spark or MapReduce context, this command returns the nodename value for the worker nodes that the code `(Sys.info()["nodename"])` is run on. For example, on a four-node cluster, you expect to receive output similar to the following snippet:
+If you are still using the Spark context, this command returns the nodename value for the worker nodes that the code `(Sys.info()["nodename"])` is run on. For example, on a four-node cluster, you expect to receive output similar to the following snippet:
 
 $rxElem1
 nodename
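
The changed paragraph also mentions parameter sweeps; a minimal sketch of that pattern with `rxExec` (the swept function and parameter values are illustrative, not part of the commit) uses RevoScaleR's `rxElemArg` to hand each run a different element:

# Sketch: distribute a parameter sweep across nodes with rxExec.
# Each run receives one element of `costs` via rxElemArg.
costs <- c(0.1, 1, 10, 100)
results <- rxExec(
    FUN  = function(cost) cost^2,   # placeholder for a real train/score step
    cost = rxElemArg(costs))
# `results` is a list with one element per run: $rxElem1 ... $rxElem4.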
