This repository was archived by the owner on Nov 17, 2023. It is now read-only.
<!-- END - Cloud Python Installation Instructions -->

<!-- START - MacOS R CPU Installation Instructions -->

<div class="macos">
<div class="r">
<div class="cpu">

The CPU version of the MXNet R package can be installed in R like any other package:

```r
install.packages("drat")
drat::addRepo("dmlc")
install.packages("mxnet")
```

</div>

<div class="gpu">

Will be available soon.

</div>

</div>
</div>
<!-- END - MacOS R CPU Installation Instructions -->


<div class="linux">
<div class="r">
<div class="cpu">

<br/>

Building *MXNet* from source is a two-step process:

1. Build the *MXNet* core shared library, `libmxnet.so`, from the C++ sources.
2. Build the language-specific bindings.

**Minimum Requirements**

1. [GCC 4.8](https://gcc.gnu.org/gcc-4.8/) or later to compile C++11.
2. [GNU Make](https://www.gnu.org/software/make/)

<br/>
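A quick way to confirm the compiler requirement is met before starting the build (a sketch; it assumes `gcc` is the compiler you will build with):

```shell
# Print the GCC version so you can confirm it is 4.8 or later
if command -v gcc >/dev/null 2>&1; then
  gcc -dumpversion
else
  echo "gcc not found - install build-essential first"
fi
```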
**Build the MXNet core shared library**

**Step 1** Install build tools and git.

```bash
$ sudo apt-get update
$ sudo apt-get install -y build-essential git
```

**Step 2** Install OpenBLAS.

*MXNet* uses a [BLAS](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) library for accelerated numerical computations on the CPU. There are several flavors of BLAS libraries - [OpenBLAS](http://www.openblas.net/), [ATLAS](http://math-atlas.sourceforge.net/) and [MKL](https://software.intel.com/en-us/intel-mkl). This step installs OpenBLAS; you can choose to install ATLAS or MKL instead.

```bash
$ sudo apt-get install -y libopenblas-dev
```
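To verify that a BLAS library is actually visible to the linker after this step, a quick check (a sketch; the library names are the common Ubuntu ones and will differ for ATLAS or MKL installs):

```shell
# Look for a BLAS shared library in the linker cache; prints a message either way
if ldconfig -p 2>/dev/null | grep -qE 'libopenblas|libblas'; then
  echo "BLAS library found"
else
  echo "no BLAS library found - install libopenblas-dev"
fi
```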

**Step 3** Install OpenCV.

*MXNet* uses [OpenCV](http://opencv.org/) for efficient image loading and augmentation operations.

*Note* - USE_OPENCV and USE_BLAS are makefile flags that set compilation options to use the OpenCV and BLAS libraries. You can explore more compilation options in `make/config.mk`.
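The CPU build command these flags apply to presumably mirrors the GPU one shown later, minus the CUDA flags (an assumption - verify against the full guide); this sketch prints the commands rather than running them:

```shell
# Presumed CPU-only OpenCV install and build commands (assumption: they mirror
# the GPU section without CUDA flags); printed for review, not executed
cat <<'EOF'
sudo apt-get install -y libopencv-dev
make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas
EOF
```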

<br/>

**Build and install the MXNet R binding**

```bash
$ make rpkg
$ R CMD INSTALL mxnet_current_r.tar.gz
```
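If `make rpkg` fails part-way, `R CMD INSTALL` then errors on a missing file; a small guard makes that failure mode explicit (a sketch; `mxnet_current_r.tar.gz` is the tarball name used above):

```shell
# Install the R package only if the build actually produced the tarball
PKG=mxnet_current_r.tar.gz
if [ -f "$PKG" ]; then
  R CMD INSTALL "$PKG"
else
  echo "missing $PKG - run 'make rpkg' first" >&2
fi
```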

</div>

<div class="gpu">

The following installation instructions have been tested on Ubuntu 14.04 and 16.04.

**Prerequisites**

Install the following NVIDIA libraries to set up *MXNet* with GPU support:

1. Install CUDA 8.0 following NVIDIA's [installation guide](http://docs.nvidia.com/cuda/cuda-installation-guide-linux/).
2. Install cuDNN 5 for CUDA 8.0 following NVIDIA's [installation guide](https://developer.nvidia.com/cudnn). You may need to register with NVIDIA to download the cuDNN library.

**Note:** Make sure to add the CUDA install path to `LD_LIBRARY_PATH`.

Example - *export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH*
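Running that export on every shell startup appends a duplicate entry each time; an idempotent variant (a sketch; `/usr/local/cuda/lib64` is the default path and may differ on your system):

```shell
# Prepend the CUDA library directory to LD_LIBRARY_PATH only if not already present
CUDA_LIB=/usr/local/cuda/lib64
case ":${LD_LIBRARY_PATH:-}:" in
  *:"$CUDA_LIB":*) ;;  # already on the path; nothing to do
  *) export LD_LIBRARY_PATH="$CUDA_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" ;;
esac
echo "$LD_LIBRARY_PATH"
```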

<br/>

Building *MXNet* from source is a two-step process:

1. Build the *MXNet* core shared library, `libmxnet.so`, from the C++ sources.
2. Build the language-specific bindings.

**Minimum Requirements**

1. [GCC 4.8](https://gcc.gnu.org/gcc-4.8/) or later to compile C++11.
2. [GNU Make](https://www.gnu.org/software/make/)

<br/>

**Build the MXNet core shared library**

**Step 1** Install build tools and git.

```bash
$ sudo apt-get update
$ sudo apt-get install -y build-essential git
```

**Step 2** Install OpenBLAS.

*MXNet* uses a [BLAS](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) library for accelerated numerical computations. There are several flavors of BLAS libraries - [OpenBLAS](http://www.openblas.net/), [ATLAS](http://math-atlas.sourceforge.net/) and [MKL](https://software.intel.com/en-us/intel-mkl). This step installs OpenBLAS; you can choose to install ATLAS or MKL instead.

```bash
$ sudo apt-get install -y libopenblas-dev
```

**Step 3** Install OpenCV.

*MXNet* uses [OpenCV](http://opencv.org/) for efficient image loading and augmentation operations.

```bash
$ make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1
```

*Note* - USE_OPENCV, USE_BLAS, USE_CUDA, USE_CUDA_PATH and USE_CUDNN are makefile flags that set compilation options to use the OpenCV, OpenBLAS, CUDA and cuDNN libraries. You can explore more compilation options in `make/config.mk`. Make sure to set USE_CUDA_PATH to the right CUDA installation path; in most cases it is */usr/local/cuda*.
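If CUDA was installed somewhere other than the default, resolving the path before invoking `make` avoids a mid-build failure (a sketch; `CUDA_HOME` is a conventional variable here, not one MXNet itself reads):

```shell
# Resolve the CUDA install path, preferring CUDA_HOME over the common default
CUDA_PATH="${CUDA_HOME:-/usr/local/cuda}"
if [ -d "$CUDA_PATH" ]; then
  echo "USE_CUDA_PATH=$CUDA_PATH"
else
  echo "no CUDA installation at $CUDA_PATH - set CUDA_HOME" >&2
fi
```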

<br/>

**Build and install the MXNet R binding**

```bash
$ make rpkg
$ R CMD INSTALL mxnet_current_r.tar.gz
```

</div>

</div>
</div>

<!-- START - Windows R CPU Installation Instructions -->

<div class="windows">
<div class="r">
<div class="cpu">

The CPU version of the MXNet R package can be installed in R like any other package:

```r
install.packages("drat")
drat::addRepo("dmlc")
install.packages("mxnet")
```

</div>

<div class="gpu">

Will be available soon.

</div>

</div>
</div>

<!-- END - Windows R CPU Installation Instructions -->

<div class="linux">
<div class="scala julia perl">
<div class="cpu gpu">

Follow the installation instructions [in this guide](./ubuntu_setup.md) to set up MXNet.

</div>

<div class="macos">
<div class="scala julia perl">
<div class="cpu gpu">

Follow the installation instructions [in this guide](./osx_setup.md) to set up MXNet.

</div>

<div class="windows">
<div class="python scala julia perl">
<div class="gpu">

Follow the installation instructions [in this guide](./windows_setup.md) to set up MXNet.

Start the Python terminal.

```bash
$ python
```

<!-- Example Python code for CPU -->

<div class="cpu">
1266
@@ -1092,7 +1284,7 @@ $
1092
1284
1093
1285
</div>
1094
1286
1095
-
<!-- Example code for CPU -->
1287
+
<!-- Example Python code for CPU -->
1096
1288
1097
1289
<divclass="gpu">
1098
1290

</div>

<!-- Example R code for CPU -->

<div class="linux macos windows">
<div class="r">
<div class="cpu">

Run a short *MXNet* R program to create a 2x3 matrix of ones, multiply each element in the matrix by 2, and then add 1. We expect the output to be a 2x3 matrix with all elements equal to 3.

```r
library(mxnet)
a <- mx.nd.ones(c(2,3), ctx = mx.cpu())
b <- a * 2 + 1
b
```

</div>
</div>
</div>

<!-- Example R code for GPU -->

<div class="linux macos windows">
<div class="r">
<div class="gpu">

Run a short *MXNet* R program to create a 2x3 matrix of ones *a* on a *GPU*, multiply each element in the matrix by 2, and then add 1. We expect the output to be a 2x3 matrix with all elements equal to 3. We use *mx.gpu()* to set the *MXNet* context to the GPU.