@@ -16,13 +16,13 @@ TensorLayer provides a simple way to create your own cost function. Take an MLP below for example.

.. code-block:: python

- network = tl.InputLayer(x, name='input_layer')
- network = tl.DropoutLayer(network, keep=0.8, name='drop1')
- network = tl.DenseLayer(network, n_units=800, act=tf.nn.relu, name='relu1')
- network = tl.DropoutLayer(network, keep=0.5, name='drop2')
- network = tl.DenseLayer(network, n_units=800, act=tf.nn.relu, name='relu2')
- network = tl.DropoutLayer(network, keep=0.5, name='drop3')
- network = tl.DenseLayer(network, n_units=10, act=tl.activation.identity, name='output_layer')
+ network = InputLayer(x, name='input_layer')
+ network = DropoutLayer(network, keep=0.8, name='drop1')
+ network = DenseLayer(network, n_units=800, act=tf.nn.relu, name='relu1')
+ network = DropoutLayer(network, keep=0.5, name='drop2')
+ network = DenseLayer(network, n_units=800, act=tf.nn.relu, name='relu2')
+ network = DropoutLayer(network, keep=0.5, name='drop3')
+ network = DenseLayer(network, n_units=10, act=tl.activation.identity, name='output_layer')
The network parameters will be ``[W1, b1, W2, b2, W_out, b_out]``,
then you can apply L2 regularization on the weight matrices of the first two layers as follows.
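
A minimal sketch of that regularization step (assuming the TensorLayer 1.x ``network.all_params`` list, ``tl.cost.cross_entropy``, and the TensorFlow 1.x ``tf.contrib.layers.l2_regularizer`` helper; ``y_`` is assumed to be the placeholder holding the target labels):

.. code-block:: python

    # Sketch only: assumes TF 1.x / TensorLayer 1.x APIs.
    # network.all_params is [W1, b1, W2, b2, W_out, b_out], so the weight
    # matrices of the first two dense layers sit at indices 0 and 2.
    y = network.outputs
    cost = tl.cost.cross_entropy(y, y_)
    cost += tf.contrib.layers.l2_regularizer(0.001)(network.all_params[0])
    cost += tf.contrib.layers.l2_regularizer(0.001)(network.all_params[2])

Because the extra terms are added to the scalar ``cost``, the optimizer penalizes large weights in ``relu1`` and ``relu2`` while leaving the biases and the output layer unregularized.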