
Commit ccedcd4

Update Links to documentation
Signed-off-by: Paul Dubs <[email protected]>
1 parent beb41ab commit ccedcd4

File tree: 22 files changed, +50 −57 lines


cuda-specific-examples/README.md

Lines changed: 2 additions & 2 deletions
@@ -1,8 +1,8 @@
 ## Eclipse Deeplearning4j: CUDA Specific Examples
 
-Switching from a CPU only backend to a GPU backend is as simple as changing one dependency - one line in the pom.xml file for Maven users. Instead of specifying the nd4j-native-platform module specify the nd4j-cuda-X-platform where X indicated the version of CUDA. It is recommended to install cuDNN for better GPU performance. Runs will log warnings if cuDNN is not found. For more information, please refer to documentation [here](https://deeplearning4j.org/docs/latest/deeplearning4j-config-cudnn)
+Switching from a CPU-only backend to a GPU backend is as simple as changing one dependency - one line in the pom.xml file for Maven users. Instead of specifying the nd4j-native-platform module, specify nd4j-cuda-X-platform, where X indicates the version of CUDA. It is recommended to install cuDNN for better GPU performance; runs will log warnings if cuDNN is not found. For more information, please refer to the documentation [here](https://deeplearning4j.konduit.ai/config/backends/config-cudnn#using-deeplearning-4-j-with-cudnn)
 
-Users with acces to multiple gpus systems can use DL4J to further speed up the training process by training the models in parallel on them. Ideally these GPUs have the same speed and networking capabilities. This project contains a set of examples that demonstrate how to leverage performance from a multiple gpus setup. More documentation can be found [here](https://deeplearning4j.konduit.ai/getting-started/tutorials/using-multiple-gpus)
+Users with access to systems with multiple GPUs can use DL4J to further speed up the training process by training models on them in parallel. Ideally these GPUs have the same speed and networking capabilities. This project contains a set of examples that demonstrate how to get the best performance from a multi-GPU setup. More documentation can be found [here](https://deeplearning4j.konduit.ai/getting-started/tutorials/using-multiple-gpus)
 
 [Go back](../README.md) to the main repository page to explore other features/functionality of the **Eclipse Deeplearning4J** ecosystem. File an issue [here](https://github.com/eclipse/deeplearning4j-examples/issues) to request new features.
 
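The one-line dependency change described in this README can be sketched as follows. This is an illustrative fragment only: the CUDA version suffix (10.2) and the `${dl4j.version}` property are placeholders, and the artifact you use must match your installed CUDA toolkit.

```xml
<!-- CPU backend (before): -->
<!--
<dependency>
  <groupId>org.nd4j</groupId>
  <artifactId>nd4j-native-platform</artifactId>
  <version>${dl4j.version}</version>
</dependency>
-->

<!-- GPU backend (after), following the nd4j-cuda-X-platform naming pattern
     from the text; 10.2 is an example CUDA version: -->
<dependency>
  <groupId>org.nd4j</groupId>
  <artifactId>nd4j-cuda-10.2-platform</artifactId>
  <version>${dl4j.version}</version>
</dependency>
```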

cuda-specific-examples/src/main/java/org/deeplearning4j/examples/multigpu/advanced/charmodelling/CharacterIterator.java

Lines changed: 0 additions & 1 deletion
@@ -163,7 +163,6 @@ public DataSet next(int num) {
         // dimension 0 = number of examples in minibatch
         // dimension 1 = size of each vector (i.e., number of characters)
         // dimension 2 = length of each time series/example
-        //Why 'f' order here? See http://deeplearning4j.org/usingrnns.html#data section "Alternative: Implementing a custom DataSetIterator"
         INDArray input = Nd4j.create(new int[]{currMinibatchSize,validCharacters.length,exampleLength}, 'f');
         INDArray labels = Nd4j.create(new int[]{currMinibatchSize,validCharacters.length,exampleLength}, 'f');
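The `'f'` argument in the `Nd4j.create` calls above selects Fortran (column-major) memory order for the 3-D minibatch array. As a generic, dependency-free illustration of what that means (plain Java, not ND4J internals; the class and method names are hypothetical), the flat offset of element (i, j, k) differs between the two orders:

```java
// Illustrates 'c' (row-major) vs 'f' (column-major) flat offsets for a
// 3-D array of shape [minibatch, vectorSize, timeSeriesLength].
// Hypothetical sketch, not ND4J's implementation.
public class OrderDemo {

    // C order: the last index (time step k) varies fastest in memory.
    static int cOrderOffset(int i, int j, int k, int d1, int d2) {
        return (i * d1 + j) * d2 + k;
    }

    // F order: the first index (example i) varies fastest, so for a fixed
    // (feature, time step) all examples in the minibatch sit contiguously.
    static int fOrderOffset(int i, int j, int k, int d0, int d1) {
        return i + d0 * (j + d1 * k);
    }

    public static void main(String[] args) {
        // Shape [2, 3, 4]: 2 examples, 3 features, 4 time steps.
        System.out.println(cOrderOffset(0, 1, 2, 3, 4)); // prints 6
        System.out.println(fOrderOffset(0, 1, 2, 2, 3)); // prints 14
    }
}
```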

cuda-specific-examples/src/main/java/org/deeplearning4j/examples/multigpu/advanced/charmodelling/GenerateTxtModel.java

Lines changed: 1 addition & 3 deletions
@@ -58,9 +58,7 @@
 from Project Gutenberg. Training on other text sources should be relatively easy to implement.
 
 For more details on RNNs in DL4J, see the following:
-http://deeplearning4j.org/usingrnns
-http://deeplearning4j.org/lstm
-http://deeplearning4j.org/recurrentnetwork
+https://deeplearning4j.konduit.ai/models/recurrent
 */
 public class GenerateTxtModel {
     public static void main( String[] args ) throws Exception {

cuda-specific-examples/src/main/java/org/deeplearning4j/examples/multigpu/advanced/transferlearning/vgg16/README.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ##### TransferLearning
-Demonstrates use of the dl4j transfer learning API which allows users to construct a model based off an existing model by modifying the architecture, freezing certain parts selectively and then fine tuning parameters. Read the documentation for the Transfer Learning API at [https://deeplearning4j.org/transfer-learning](https://deeplearning4j.org/transfer-learning).
+Demonstrates use of the DL4J transfer learning API, which allows users to construct a model based on an existing model by modifying the architecture, selectively freezing certain parts, and then fine-tuning parameters. Read the documentation for the Transfer Learning API at [https://deeplearning4j.konduit.ai/tuning-and-training/transfer-learning](https://deeplearning4j.konduit.ai/tuning-and-training/transfer-learning).
 
 For more examples, refer to the section in the dl4j-examples repo [here](../../../../../../../../../../../dl4j-examples/src/main/java/org/deeplearning4j/examples/advanced/features/transferlearning/README.md)
 

dl4j-distributed-training-examples/scripts/patentExampleTrain.sh

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ NUM_NODES=... #Number of nodes in the cluster
 SPARKSUBMIT=/opt/spark/bin/spark-submit
 MASTER_PORT=7077 #Port for the spark master. Default is 7077
 MINIBATCH=32 #Minibatch size for preprocessed datasets
-# For memory config, see https://deeplearning4j.org/memory
+# For memory config, see https://deeplearning4j.konduit.ai/config/config-memory
 JAVA_HEAP_MEM=10G
 OFFHEAP_MEM_JAVACPP=20G
 OFFHEAP_JAVACPP_MAX_PHYS=30G

dl4j-distributed-training-examples/src/main/java/org/deeplearning4j/distributedtrainingexamples/patent/README.md

Lines changed: 6 additions & 6 deletions
@@ -15,7 +15,7 @@ Number of classes: 398
 Number of documents/examples (after preprocessing): approx. 5.7 million (training set) plus approx. 170000 (test set)
 
 Dataset size: approx. 86 GB (zip format), 464 GB raw text. Note the example performs preprocessing from the compressed ZIP format.
-Requires an additional 20GB of storage space for preprocessing
+Requires an additional 20GB of storage space for preprocessing
 
 **Neural Network**: a CNN classifier for text classification. Approximately 600,000 parameters
 
@@ -70,7 +70,7 @@ MASTER_IP=...
 AZURE_STORAGE_ACCT=...
 AZURE_STORAGE_ACCT_KEY=...
 AZURE_CONTAINER_ZIPS=patentzips
-AZURE_CONTAINER_PREPROC=patentExamplePreproc
+AZURE_CONTAINER_PREPROC=patentExamplePreproc
 ```
 
 Note that some clusters may have the master already configured.
@@ -93,7 +93,7 @@ is pointed to the same value for ```AZURE_CONTAINER_PREPROC```.
 **Alternative to setting the storage account**
 
 You can set the storage account credentials in your Hadoop core-site.xml file. See "Configuring Credentials" in this guide for details: [https://hadoop.apache.org/docs/current/hadoop-azure/index.html](https://hadoop.apache.org/docs/current/hadoop-azure/index.html)
-
+
 
 **Second: Run the Script**
 
@@ -108,7 +108,7 @@ After preprocessing is complete, you will have:
 2. For HTTP access (if enabled): ```https://AZURE_STORAGE_ACCT.blob.core.windows.net/AZURE_CONTAINER_ZIPS/```
 2. Preprocessed training and test data (with default sequence length of 1000 and minibatch size of 32)
 1. For Spark access: ```wasbs://AZURE_CONTAINER_PREPROC@AZURE_STORAGE_ACCT.blob.core.windows.net/seqLength1000_mb32/```
-2. For HTTP access (if enabled): ```https://AZURE_STORAGE_ACCT.blob.core.windows.net/AZURE_CONTAINER_PREPROC/```
+2. For HTTP access (if enabled): ```https://AZURE_STORAGE_ACCT.blob.core.windows.net/AZURE_CONTAINER_PREPROC/```
 
 Note that the preprocessed directory will have ```train``` and ```test``` subdirectories.
 The format of the files in those train/test directories is a custom format designed to be loaded
@@ -131,7 +131,7 @@ Set the following required arguments to the same values used for the preprocessi
 MASTER_IP=...
 AZURE_STORAGE_ACCT=...
 AZURE_STORAGE_ACCT_KEY=...
-AZURE_CONTAINER_PREPROC=patentExamplePreproc
+AZURE_CONTAINER_PREPROC=patentExamplePreproc
 ```
 
 The following configuration options also need to be set:
@@ -142,7 +142,7 @@ LOCAL_SAVE_DIR
 
 Your network mask should be set to the network used for spark communication. For example, [10.0.0.0/16]
 See the following links for further details:
-* [DL4J Distributed Training - Netmask](https://deeplearning4j.org/distributed#netmask)
+* [DL4J Distributed Training - Netmask](https://deeplearning4j.konduit.ai/distributed-deep-learning/parameter-server#netmask)
 * [How to Find the IP Address, Subnet Mask & Gateway of a Computer](https://yourbusiness.azcentral.com/ip-address-subnet-mask-gateway-computer-14563.html)
 * [What is a Subnet Mask](https://www.iplocation.net/subnet-mask)
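The network mask discussed above is a CIDR block (e.g. 10.0.0.0/16) that the IP addresses of all Spark nodes must fall inside. As a minimal, dependency-free sketch of what such a match means (plain Java; this is not the DL4J implementation, and the class and method names are hypothetical), an address belongs to the block when its masked prefix matches:

```java
// Hypothetical sketch: does an IPv4 address fall inside a CIDR block
// such as the 10.0.0.0/16 network mask used for Spark communication?
public class NetmaskCheck {

    // Convert dotted-quad IPv4 notation to a 32-bit integer.
    static int toInt(String ip) {
        int v = 0;
        for (String octet : ip.split("\\.")) {
            v = (v << 8) | Integer.parseInt(octet);
        }
        return v;
    }

    // An address is in the block when address & mask == network & mask.
    static boolean inSubnet(String ip, String cidr) {
        String[] parts = cidr.split("/");
        int prefix = Integer.parseInt(parts[1]);
        int mask = prefix == 0 ? 0 : -1 << (32 - prefix);
        return (toInt(ip) & mask) == (toInt(parts[0]) & mask);
    }

    public static void main(String[] args) {
        System.out.println(inSubnet("10.0.3.7", "10.0.0.0/16"));    // prints true
        System.out.println(inSubnet("192.168.1.5", "10.0.0.0/16")); // prints false
    }
}
```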

dl4j-distributed-training-examples/src/main/java/org/deeplearning4j/distributedtrainingexamples/tinyimagenet/TrainSpark.java

Lines changed: 2 additions & 2 deletions
@@ -78,7 +78,7 @@
 * a larger network, better selection of hyperparameters, and more epochs.
 *
 * For further details on DL4J's Spark implementation, see the "Distributed Deep Learning" pages at:
-* https://deeplearning4j.org/docs/latest/
+* https://deeplearning4j.konduit.ai/distributed-deep-learning/intro
 *
 * A local (single machine) version of this example is available in TrainLocal
 *
@@ -145,7 +145,7 @@ protected void entryPoint(String[] args) throws Exception {
         //Set up TrainingMaster for gradient sharing training
         VoidConfiguration voidConfiguration = VoidConfiguration.builder()
             .unicastPort(port) // Should be open for IN/OUT communications on all Spark nodes
-            .networkMask(networkMask) // Local network mask - for example, 10.0.0.0/16 - see https://deeplearning4j.org/docs/latest/deeplearning4j-scaleout-parameter-server
+            .networkMask(networkMask) // Local network mask - for example, 10.0.0.0/16 - see https://deeplearning4j.konduit.ai/distributed-deep-learning/parameter-server#netmask
             .controllerAddress(masterIP) // IP address of the master/driver node
             .meshBuildMode(MeshBuildMode.PLAIN)
             .build();

dl4j-examples/README.md

Lines changed: 1 addition & 1 deletion
@@ -166,7 +166,7 @@ Trace where data from each example comes from and get metadata on prediction err
 Train a MultiLayerNetwork where the errors come from an external source, instead of using an Output layer and a labels array.
 
 ##### TransferLearning
-Demonstrates use of the dl4j transfer learning API which allows users to construct a model based off an existing model by modifying the architecture, freezing certain parts selectively and then fine tuning parameters. Read the documentation for the Transfer Learning API at [https://deeplearning4j.org/transfer-learning](https://deeplearning4j.org/transfer-learning).
+Demonstrates use of the DL4J transfer learning API, which allows users to construct a model based on an existing model by modifying the architecture, selectively freezing certain parts, and then fine-tuning parameters. Read the documentation for the Transfer Learning API at [https://deeplearning4j.konduit.ai/tuning-and-training/transfer-learning](https://deeplearning4j.konduit.ai/tuning-and-training/transfer-learning).
 * [EditLastLayerOthersFrozen.java](./src/main/java/org/deeplearning4j/examples/advanced/features/transferlearning/editlastlayer/EditLastLayerOthersFrozen.java)
 Modifies just the last layer in vgg16, freezes the rest and trains the network on the flower dataset.
 * [FeaturizedPreSave.java](./src/main/java/org/deeplearning4j/examples/advanced/features/transferlearning/editlastlayer/presave/FeaturizedPreSave.java) & [FitFromFeaturized.java](./src/main/java/org/deeplearning4j/examples/advanced/features/transferlearning/editlastlayer/presave/FitFromFeaturized.java)
Lines changed: 4 additions & 4 deletions
@@ -1,10 +1,10 @@
 ##### TransferLearning
-Demonstrates use of the dl4j transfer learning API which allows users to construct a model based off an existing model by modifying the architecture, freezing certain parts selectively and then fine tuning parameters. Read the documentation for the Transfer Learning API at [https://deeplearning4j.org/transfer-learning](https://deeplearning4j.org/transfer-learning).
-* [EditLastLayerOthersFrozen.java](./editlastlayer/EditLastLayerOthersFrozen.java)
+Demonstrates use of the DL4J transfer learning API, which allows users to construct a model based on an existing model by modifying the architecture, selectively freezing certain parts, and then fine-tuning parameters. Read the documentation for the Transfer Learning API at [https://deeplearning4j.konduit.ai/tuning-and-training/transfer-learning](https://deeplearning4j.konduit.ai/tuning-and-training/transfer-learning).
+* [EditLastLayerOthersFrozen.java](./editlastlayer/EditLastLayerOthersFrozen.java)
 Modifies just the last layer in vgg16, freezes the rest and trains the network on the flower dataset.
 * [FeaturizedPreSave.java](./editlastlayer/presave/FeaturizedPreSave.java) & [FitFromFeaturized.java](./editlastlayer/presave/FitFromFeaturized.java)
 Save time on the forward pass during multiple epochs by "featurizing" the datasets. FeaturizedPreSave saves the output at the last frozen layer and FitFromFeaturized fits to the presaved data so you can iterate quicker with different learning parameters.
-* [EditAtBottleneckOthersFrozen.java](./editfrombottleneck/EditAtBottleneckOthersFrozen.java)
+* [EditAtBottleneckOthersFrozen.java](./editfrombottleneck/EditAtBottleneckOthersFrozen.java)
 A more complex example of modifying model architecture by adding/removing vertices
-* [FineTuneFromBlockFour.java](./finetuneonly/FineTuneFromBlockFour.java)
+* [FineTuneFromBlockFour.java](./finetuneonly/FineTuneFromBlockFour.java)
 Reads in a saved model (training information and all) and fine tunes it by overriding its training information with what is specified

dl4j-examples/src/main/java/org/deeplearning4j/examples/advanced/modelling/charmodelling/generatetext/GenerateTxtCharCompGraphModel.java

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@
 /**
  * This example is almost identical to the LSTMCharModellingExample, except that it utilizes the ComputationGraph
  * architecture instead of MultiLayerNetwork architecture. See the javadoc in that example for details.
- * For more details on the ComputationGraph architecture, see http://deeplearning4j.org/compgraph
+ * For more details on the ComputationGraph architecture, see https://deeplearning4j.konduit.ai/models/computationgraph
 *
 * In addition to the use of the ComputationGraph, this version has skip connections between the first and output layers,
 * in order to show how this configuration is done. In practice, this means we have the following types of connections:
