Commit f53fa1f

Merge branch 'master' into Release
2 parents 4e61852 + 9046b76 commit f53fa1f

File tree: 4 files changed (+4, -14 lines)

README.md

Lines changed: 2 additions & 0 deletions

@@ -26,6 +26,8 @@ This project can also be found at [iHub](https://code.ihub.org.cn/projects/328)
 
 # News
 
+🔥 **3.0.0 will support multiple backends, such as TensorFlow, MindSpore and more, allowing users to run the code on different hardware like Nvidia-GPU and Huawei-Ascend. We need more people to join the dev team; if you are interested, please email [email protected]**
+
 🔥 Reinforcement Learning Zoo: [Low-level APIs](https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning) for professional usage, [High-level APIs](https://github.com/tensorlayer/RLzoo) for simple usage, and a corresponding [Springer textbook](http://springer.com/gp/book/9789811540943)
 
 🔥 [Sipeed Maxi-EMC](https://github.com/sipeed/Maix-EMC): Run TensorLayer models on the **low-cost AI chip** (e.g., K210) (Alpha Version)

examples/basic_tutorials/tutorial_cifar10_cnn_static.py

Lines changed: 1 addition & 9 deletions

@@ -155,24 +155,16 @@ def _map_fn_test(img, target):
 
 for epoch in range(n_epoch):
     start_time = time.time()
-
     train_loss, train_acc, n_iter = 0, 0, 0
     for X_batch, y_batch in train_ds:
         net.train()
-
         with tf.GradientTape() as tape:
             # compute outputs
             _logits = net(X_batch)
             # compute loss and update model
-            _loss_ce = tl.cost.cross_entropy(_logits, y_batch, name='train_loss')
-            _loss_L2 = 0
-            # for p in tl.layers.get_variables_with_name('relu/W', True, True):
-            #     _loss_L2 += tl.cost.lo_regularizer(1.0)(p)
-            _loss = _loss_ce + _loss_L2
+            _loss = tl.cost.cross_entropy(_logits, y_batch, name='train_loss')
         grad = tape.gradient(_loss, train_weights)
         optimizer.apply_gradients(zip(grad, train_weights))
-
         train_loss += _loss
         train_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch))
         n_iter += 1
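The deletion above is behavior-preserving: with the regularizer loop commented out, `_loss_L2` stays 0, so `_loss_ce + _loss_L2` equals the cross-entropy alone. A minimal pure-Python sketch of that identity (the `cross_entropy` function here is a hypothetical stand-in for `tl.cost.cross_entropy`, used only to illustrate the equivalence on toy logits and labels):

```python
import math

def cross_entropy(logits, labels):
    # softmax cross-entropy averaged over the batch, computed with the
    # max-shift trick for numerical stability (stand-in, not TensorLayer API)
    total = 0.0
    for row, label in zip(logits, labels):
        m = max(row)
        log_sum = m + math.log(sum(math.exp(x - m) for x in row))
        total += log_sum - row[label]
    return total / len(labels)

logits = [[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]]
labels = [0, 1]

# old formulation: cross-entropy plus a regularizer that never grows
loss_ce = cross_entropy(logits, labels)
loss_l2 = 0  # the regularizer loop was commented out, so this stays 0

# new formulation: cross-entropy alone -- identical result
assert math.isclose(loss_ce + loss_l2, cross_entropy(logits, labels))
```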

examples/reinforcement_learning/README.md

Lines changed: 0 additions & 4 deletions

@@ -45,7 +45,6 @@ A corresponding [Springer textbook](https://deepreinforcementlearningbook.org) i
 * tensorflow >= 2.0.0 or tensorflow-gpu >= 2.0.0a0
 * tensorlayer >= 2.0.1
 * tensorflow-probability
-* tf-nightly-2.0-preview
 
 
 *** If you meet the error `AttributeError: module 'tensorflow' has no attribute 'contrib'` when running the code after installing tensorflow-probability, try:

@@ -108,7 +107,6 @@ The pretrained models and learning curves for each algorithm are stored [here](h
 See David Silver RL Tutorial Lecture 5 - Q-Learning for more details.
 ```
 
-
 
 * **Deep Q-Network (DQN)**
 

@@ -157,8 +155,6 @@ The pretrained models and learning curves for each algorithm are stored [here](h
 ```
 
 
-```
-
 
 

tensorlayer/models/vgg.py

Lines changed: 1 addition & 1 deletion

@@ -82,7 +82,7 @@
 
 model_urls = {
     'vgg16': 'http://www.cs.toronto.edu/~frossard/vgg16/',
-    'vgg19': 'https://media.githubusercontent.com/media/tensorlayer/pretrained-models/master/models/'
+    'vgg19': 'https://github.com/tensorlayer/pretrained-models/blob/master/models/vgg19.npy'
 }
 
 model_saved_name = {'vgg16': 'vgg16_weights.npz', 'vgg19': 'vgg19.npy'}
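For context, the two tables above are what the rest of `vgg.py` combines to locate pretrained weights: the `vgg16` entry is a directory URL, while the corrected `vgg19` entry now points straight at the `.npy` file. A hedged sketch of how the entries pair up (the helper name `weights_source` is hypothetical, not TensorLayer API):

```python
model_urls = {
    'vgg16': 'http://www.cs.toronto.edu/~frossard/vgg16/',
    'vgg19': 'https://github.com/tensorlayer/pretrained-models/blob/master/models/vgg19.npy',
}
model_saved_name = {'vgg16': 'vgg16_weights.npz', 'vgg19': 'vgg19.npy'}

def weights_source(arch):
    # hypothetical helper: vgg16's URL is a directory, so the saved
    # filename is appended; vgg19's URL already names the file itself
    url = model_urls[arch]
    return url + model_saved_name[arch] if url.endswith('/') else url
```

With this layout, `weights_source('vgg16')` yields the directory URL plus `vgg16_weights.npz`, and `weights_source('vgg19')` returns the direct file URL unchanged.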

0 commit comments