
Commit 5a055c7

committed
release 0.5.0
1 parent 7fc6650 commit 5a055c7

6 files changed: +269 -221 lines changed

README.md

Lines changed: 39 additions & 36 deletions
@@ -1,6 +1,6 @@
 <a href="https://tensorlayerx.readthedocs.io/">
 <div align="center">
-<img src="https://git.openi.org.cn/hanjr/tensorlayerx-image/raw/branch/master/tlx-LOGO-04.png" width="50%" height="30%"/>
+<img src="https://git.openi.org.cn/hanjr/tensorlayerx-image/raw/branch/master/tlx-LOGO--02.jpg" width="50%" height="30%"/>
 </div>
 </a>

@@ -18,18 +18,13 @@
 
 🇨🇳 TensorLayerX is a cross-platform development framework that runs on a variety of operating systems and AI hardware and supports mixed-framework development. It currently supports the common neural-network layers and operators of the TensorFlow, MindSpore, and PaddlePaddle frameworks; PyTorch support is under development. [Supported features list](https://shimo.im/sheets/kJGCCTxXvqj99RGV/F5m5Z)
 
+# News
+🔥 **TensorLayerX has been released. It supports the TensorFlow, MindSpore, and PaddlePaddle backends, as well as some PyTorch operator backends, allowing users to run the same code on different hardware such as Nvidia GPU and Huawei Ascend. Feel free to use it and make suggestions.**
 
-<details>
-<summary>🇷🇺 TensorLayerX</summary>
-input text here.
-</details>
+🔥 **We need more people to join the dev team. If you are interested, please email [email protected]**
 
-<details>
-<summary>🇸🇦 TensorLayerX</summary>
-input text here.
-</details>
 
-# TensorLayerX
+# Design Features
 
 Compared with [TensorLayer](https://github.com/tensorlayer/TensorLayer), TensorLayerX (TLX) is a brand-new, separate project built for platform-agnostic use.
@@ -47,13 +42,6 @@ Comparison of TensorLayer version
 
 🔥 **Feel free to use TensorLayerX and make suggestions. We need more people to join the dev team; if you are interested, please email [email protected]**
 
-# Examples
-
-- [Basic Examples](https://github.com/tensorlayer/TensorLayerX/tree/main/examples)
-- [TLCV] **Coming soon!**
-
-
 # Quick Start
 
 - Installation
@@ -62,34 +50,49 @@ Comparison of TensorLayer version
 pip3 install tensorlayerx
 # install from Github
 pip3 install git+https://github.com/tensorlayer/tensorlayerx.git
-# install from OpenI
-pip3 install git+https://git.openi.org.cn/OpenI/tensorlayerX.git
-```
-If you want to use the TensorFlow backend, you should install TensorFlow:
-```bash
-pip3 install tensorflow  # if you want to use GPUs, CUDA and CuDNN are required.
 ```
+For more installation instructions, please refer to [Installation](https://tensorlayerx.readthedocs.io/en/latest/user/installation.html)
 
+- Define a model
 
-If you want to use the MindSpore backend, you should install mindspore>=1.2.1:
-```bash
-pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.2.1/MindSpore/gpu/ubuntu_x86/cuda-10.1/mindspore_gpu-1.2.1-cp37-cp37m-linux_x86_64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
-```
+You can immediately use TensorLayerX to define a model, using your favourite framework in the background, like so:
+```python
+import os
+os.environ['TL_BACKEND'] = 'tensorflow'  # change to any framework!
 
-If you want to use the PaddlePaddle backend, you should install paddlepaddle>=2.1.1:
-```bash
-python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
-```
+import tensorlayerx as tlx
+from tensorlayerx.nn import Module
+from tensorlayerx.nn import Dense
+class CustomModel(Module):
 
-If you want to use the PyTorch backend, you should install PyTorch>=1.8.0:
-```bash
-pip3 install torch==1.8.2+cu102 torchvision==0.9.2+cu102 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
+    def __init__(self):
+        super(CustomModel, self).__init__()
+
+        self.dense1 = Dense(n_units=800, act=tlx.ReLU, in_channels=784)
+        self.dense2 = Dense(n_units=800, act=tlx.ReLU, in_channels=800)
+        self.dense3 = Dense(n_units=10, act=None, in_channels=800)
+
+    def forward(self, x, foo=False):
+        z = self.dense1(x)
+        z = self.dense2(z)
+        out = self.dense3(z)
+        if foo:
+            out = tlx.softmax(out)
+        return out
+
+MLP = CustomModel()
+MLP.set_eval()
 ```
 
-- [Tutorial](https://github.com/tensorlayer/TensorLayerX/tree/main/examples/basic_tutorials)
+# Document
+TensorLayerX has extensive documentation for both beginners and professionals.
+
+[![English Documentation](https://img.shields.io/badge/documentation-english-blue.svg)](https://tensorlayerx.readthedocs.io/en/latest/)
 
-- Discussion: [Slack](https://join.slack.com/t/tensorlayer/shared_invite/enQtODk1NTQ5NTY1OTM5LTQyMGZhN2UzZDBhM2I3YjYzZDBkNGExYzcyZDNmOGQzNmYzNjc3ZjE3MzhiMjlkMmNiMmM3Nzc4ZDY2YmNkMTY), [QQ-Group], [WeChat-Group]
+# Examples
 
+- [Basic Examples](https://github.com/tensorlayer/TensorLayerX/tree/main/examples)
+- [TLCV] **Coming soon!**
 
 
 # Contact
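The README's new quick-start snippet ends with `MLP.set_eval()`, so the model is ready for inference. As a quick smoke test, a forward pass might look like this (a sketch: the random batch and the `tlx.convert_to_tensor` call are illustrative additions, not part of this commit):

```python
import numpy as np

# hypothetical batch of 4 flattened 28x28 images
x = tlx.convert_to_tensor(np.random.normal(size=(4, 784)).astype('float32'))
logits = MLP(x)           # raw scores from dense3
probs = MLP(x, foo=True)  # same pass, with tlx.softmax applied to the output
```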

tensorlayerx/backend/ops/torch_backend.py

Lines changed: 1 addition & 1 deletion
@@ -1005,7 +1005,7 @@ def __init__(self, axis=None, epsilon=1e-12):
         self.epsilon = epsilon
 
     def __call__(self, input, *args, **kwargs):
-        raise NotImplementedError
+        return torch.linalg.norm(input, ord=2, dim=self.axis)
 
 
 class EmbeddingLookup(object):
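For reference, the torch call that replaces the `NotImplementedError` computes a vector 2-norm along the configured axis; a quick standalone illustration:

```python
import torch

x = torch.tensor([[3.0, 4.0],
                  [6.0, 8.0]])
# with axis=1, each row collapses to its Euclidean length
print(torch.linalg.norm(x, ord=2, dim=1))  # tensor([ 5., 10.])
```

Note that the op returns the norm itself rather than the normalized tensor, and `self.epsilon` is left unused by this implementation.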

tensorlayerx/model/core.py

Lines changed: 38 additions & 181 deletions
@@ -2,7 +2,10 @@
 # -*- coding: utf-8 -*-
 
 from collections.abc import Iterable
-from tensorlayerx.nn.core.common import _save_weights, _load_weights, _save_standard_weights_dict, _load_standard_weights_dict
+from tensorlayerx.nn.core.common import _save_weights, _load_weights, \
+    _save_standard_weights_dict, _load_standard_weights_dict
+from .utils import WithLoss, WithGradPD, WithGradMS, WithGradTF, TrainOneStepWithPD, \
+    TrainOneStepWithMS, TrainOneStepWithTH, TrainOneStepWithTF, GradWrap
 import tensorlayerx as tlx
 from tensorlayerx.nn import Module
 import numpy as np
@@ -11,9 +14,7 @@
 if tlx.BACKEND == 'tensorflow':
     import tensorflow as tf
 if tlx.BACKEND == 'mindspore':
-    from mindspore.ops import composite
     from mindspore.ops import operations as P
-    from mindspore.common import ParameterTuple
 if tlx.BACKEND == 'paddle':
     import paddle as pd
 if tlx.BACKEND == 'torch':
@@ -108,6 +109,12 @@ def train(self, n_epoch, train_dataset=None, test_dataset=False, print_train_bat
                 train_weights=self.train_weights, optimizer=self.optimizer, metrics=self.metrics,
                 print_train_batch=print_train_batch, print_freq=print_freq, test_dataset=test_dataset
             )
+        elif tlx.BACKEND == 'torch':
+            self.th_train(
+                n_epoch=n_epoch, train_dataset=train_dataset, network=self.network, loss_fn=self.loss_fn,
+                train_weights=self.train_weights, optimizer=self.optimizer, metrics=self.metrics,
+                print_train_batch=print_train_batch, print_freq=print_freq, test_dataset=test_dataset
+            )
 
     def eval(self, test_dataset):
         self.network.set_eval()
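With this new branch, the high-level `Model.train` loop can now dispatch to `th_train` when the torch backend is active. A minimal sketch of driving it, assuming the `tlx.model.Model` constructor takes `network`, `loss_fn`, `optimizer`, and `metrics` as suggested by the attributes used above (`net`, `train_loader`, and the optimizer setup are illustrative, not part of this commit):

```python
import os
os.environ['TL_BACKEND'] = 'torch'  # must be set before importing tensorlayerx

import tensorlayerx as tlx

# `net` is any tlx Module, e.g. the CustomModel MLP from the README diff above;
# `train_loader` is a placeholder for an iterable of (X_batch, y_batch) pairs
net = CustomModel()
optimizer = tlx.optimizers.Adam(lr=0.001)  # assumed signature; may differ by release
model = tlx.model.Model(
    network=net, loss_fn=tlx.losses.softmax_cross_entropy_with_logits,
    optimizer=optimizer, metrics=None
)
model.train(n_epoch=5, train_dataset=train_loader, print_freq=2)  # routed to th_train
```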
@@ -436,10 +443,9 @@ def th_train(
 
                 train_loss += loss
                 if metrics:
-                    pass
-                    # metrics.update(output, y_batch)
-                    # train_acc += metrics.result()
-                    # metrics.reset()
+                    metrics.update(output, y_batch)
+                    train_acc += metrics.result()
+                    metrics.reset()
                 else:
                     train_acc += (output.argmax(1) == y_batch).type(torch.float).sum().item()
                 n_iter += 1
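This un-comments the metric path, so the torch trainer now exercises the same metric protocol as the other backends: an object exposing `update`, `result`, and `reset`. A minimal compatible accuracy metric might look like the following (an illustrative class, not part of the commit; assumes torch tensors of logits and integer labels):

```python
import torch

class SketchAccuracy:
    """Illustrative metric with the update/result/reset interface used above."""

    def __init__(self):
        self.correct, self.total = 0, 0

    def update(self, logits, labels):
        # count correct top-1 predictions in this batch
        self.correct += (logits.argmax(1) == labels).sum().item()
        self.total += labels.shape[0]

    def result(self):
        return self.correct / max(self.total, 1)

    def reset(self):
        self.correct, self.total = 0, 0
```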
@@ -454,180 +460,23 @@ def th_train(
                 print(" train loss: {}".format(train_loss / n_iter))
                 print(" train acc: {}".format(train_acc / n_iter))
 
-
-class WithLoss(Module):
-    """
-    High-Level API for Training or Testing.
-
-    Wraps the network with loss function. This Module accepts data and label as inputs and
-    the computed loss will be returned.
-
-    Parameters
-    ----------
-    backbone : tensorlayer model
-        The tensorlayer network.
-    loss_fn : function
-        Objective function
-
-    Methods
-    ---------
-    forward()
-        Model inference.
-
-    Examples
-    --------
-    >>> import tensorlayerx as tlx
-    >>> net = vgg16()
-    >>> loss_fn = tlx.losses.softmax_cross_entropy_with_logits
-    >>> net_with_loss = tlx.model.WithLoss(net, loss_fn)
-
-    """
-
-    def __init__(self, backbone, loss_fn):
-        super(WithLoss, self).__init__()
-        self._backbone = backbone
-        self._loss_fn = loss_fn
-
-    def forward(self, data, label):
-        out = self._backbone(data)
-        return self._loss_fn(out, label)
-
-    @property
-    def backbone_network(self):
-        return self._backbone
-
-
-class GradWrap(Module):
-    """ GradWrap definition """
-
-    def __init__(self, network, trainable_weights):
-        super(GradWrap, self).__init__(auto_prefix=False)
-        self.network = network
-        self.weights = ParameterTuple(trainable_weights)
-
-    def forward(self, x, label):
-        return composite.GradOperation(get_by_list=True)(self.network, self.weights)(x, label)
-
-
-class WithGradMS(Module):
-    "Module that returns the gradients."
-
-    def __init__(self, network, loss_fn=None, sens=None, optimizer=None):
-        super(WithGradMS, self).__init__()
-        self.network = network
-        self.loss_fn = loss_fn
-        self.weights = ParameterTuple(network.trainable_weights)
-        self.grad = composite.GradOperation(get_by_list=True, sens_param=(sens is not None))
-        self.sens = sens
-        self.optimizer = optimizer
-        if self.loss_fn is None:
-            self.network_with_loss = network
-        else:
-            self.network_with_loss = WithLoss(self.network, self.loss_fn)
-        self.network.set_train()
-
-    def forward(self, inputs, label):
-        grads = self.grad(self.network_with_loss, self.weights)(inputs, label)
-        return grads
-
-
-class WithGradTF(object):
-
-    def __init__(self, network, loss_fn=None, optimizer=None):
-        self.network = network
-        self.loss_fn = loss_fn
-        self.train_weights = self.network.trainable_weights
-        self.optimizer = optimizer
-        if loss_fn is None:
-            self.network_with_loss = network
-        else:
-            self.network_with_loss = WithLoss(self.network, self.loss_fn)
-        self.network.set_train()
-
-    def __call__(self, inputs, label):
-        with tf.GradientTape() as tape:
-            loss = self.network_with_loss(inputs, label)
-        grads = tape.gradient(loss, self.train_weights)
-        return grads
-
-
-class WithGradPD(object):
-
-    def __init__(self, network, loss_fn=None, optimizer=None):
-        self.network = network
-        self.loss_fn = loss_fn
-        self.train_weights = self.network.trainable_weights
-        self.optimizer = optimizer
-        if loss_fn is None:
-            self.network_with_loss = network
-        else:
-            self.network_with_loss = WithLoss(self.network, self.loss_fn)
-        self.network.set_train()
-
-    def __call__(self, inputs, label):
-        loss = self.network_with_loss(inputs, label)
-        grads = self.optimizer.gradient(loss, self.train_weights)
-        return grads
-
-
-class TrainOneStepWithTF(object):
-
-    def __init__(self, net_with_loss, optimizer, train_weights):
-        self.net_with_loss = net_with_loss
-        self.optimzer = optimizer
-        self.train_weights = train_weights
-
-    def __call__(self, data, label):
-        with tf.GradientTape() as tape:
-            loss = self.net_with_loss(data, label)
-        grad = tape.gradient(loss, self.train_weights)
-        self.optimzer.apply_gradients(zip(grad, self.train_weights))
-        return loss
-
-
-class TrainOneStepWithMS(object):
-
-    def __init__(self, net_with_loss, optimizer, train_weights):
-        self.net_with_loss = net_with_loss
-        self.optimizer = optimizer
-        self.train_weights = train_weights
-        self.net_with_loss = net_with_loss
-        self.train_network = GradWrap(net_with_loss, train_weights)
-
-    def __call__(self, data, label):
-        loss = self.net_with_loss(data, label)
-        grads = self.train_network(data, label)
-        self.optimizer.apply_gradients(zip(grads, self.train_weights))
-        loss = loss.asnumpy()
-        return loss
-
-
-class TrainOneStepWithPD(object):
-
-    def __init__(self, net_with_loss, optimizer, train_weights):
-        self.net_with_loss = net_with_loss
-        self.optimizer = optimizer
-        self.train_weights = train_weights
-
-    def __call__(self, data, label):
-        loss = self.net_with_loss(data, label)
-        grads = self.optimizer.gradient(loss, self.train_weights)
-        self.optimizer.apply_gradients(zip(grads, self.train_weights))
-        return loss.numpy()
-
-
-class TrainOneStepWithTH(object):
-
-    def __init__(self, net_with_loss, optimizer, train_weights):
-        self.net_with_loss = net_with_loss
-        self.optimizer = optimizer
-        self.train_weights = train_weights
-
-    def __call__(self, data, label):
-        loss = self.net_with_loss(data, label)
-        grads = self.optimizer.gradient(loss, self.train_weights)
-        self.optimizer.apply_gradients(zip(grads, self.train_weights))
-        return loss
+            if test_dataset:
+                # use training and evaluation sets to evaluate the model every print_freq epoch
+                if epoch + 1 == 1 or (epoch + 1) % print_freq == 0:
+                    network.set_eval()
+                    val_loss, val_acc, n_iter = 0, 0, 0
+                    for X_batch, y_batch in test_dataset:
+                        _logits = network(X_batch)  # is_train=False, disable dropout
+                        val_loss += loss_fn(_logits, y_batch, name='eval_loss')
+                        if metrics:
+                            metrics.update(_logits, y_batch)
+                            val_acc += metrics.result()
+                            metrics.reset()
+                        else:
+                            val_acc += (_logits.argmax(1) == y_batch).type(torch.float).sum().item()
+                        n_iter += 1
+                    print(" val loss: {}".format(val_loss / n_iter))
+                    print(" val acc: {}".format(val_acc / n_iter))
 
 
 class WithGrad(object):
@@ -713,3 +562,11 @@ def __init__(self, net_with_loss, optimizer, train_weights):
     def __call__(self, data, label):
         loss = self.net_with_train(data, label)
         return loss
+
+
+class TrainOneStepWithGradientClipping(object):
+    def __init__(self):
+        pass
+
+    def __call__(self, data, label):
+        pass
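`TrainOneStepWithGradientClipping` lands in this release as an empty stub. One plausible torch-backend shape for it, mirroring the one-step trainers above and using `torch.nn.utils.clip_grad_norm_` (a sketch under those assumptions, not the project's eventual implementation):

```python
import torch

class SketchTrainOneStepWithGradientClipping(object):
    """Illustrative one-step trainer that clips the global grad norm."""

    def __init__(self, net_with_loss, optimizer, train_weights, max_norm=1.0):
        self.net_with_loss = net_with_loss
        self.optimizer = optimizer          # assumed to be a raw torch optimizer
        self.train_weights = train_weights  # iterable of torch Parameters
        self.max_norm = max_norm

    def __call__(self, data, label):
        loss = self.net_with_loss(data, label)
        self.optimizer.zero_grad()
        loss.backward()
        # clip the global gradient norm before taking the update step
        torch.nn.utils.clip_grad_norm_(self.train_weights, self.max_norm)
        self.optimizer.step()
        return loss
```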
