[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/tensorflow-recommenders-addons)](https://pypi.org/project/tensorflow-recommenders-addons/)
[![Documentation](https://img.shields.io/badge/api-reference-blue.svg)](docs/api_docs/)
-
TensorFlow Recommenders Addons (TFRA) are a collection of projects related to large-scale recommendation systems
built upon TensorFlow by introducing the **Dynamic Embedding Technology** to TensorFlow
- that make TensorFlow more suitable for training models of **Search, Recommendations and Advertising**.
- These projects are contributed and maintained by the community. Those contributions will be complementary to
- TensorFlow Core and TensorFlow Recommenders etc.
-
- ## Scope
-
- See the approved TensorFlow RFC #[313](https://github.com/tensorflow/community/pull/313).
-
- TensorFlow has open-sourced [TensorFlow Recommenders](https://blog.tensorflow.org/2020/09/introducing-tensorflow-recommenders.html)
- ([github.com/tensorflow/recommenders](http://github.com/tensorflow/recommenders)),
- an open-source TensorFlow package that makes building, evaluating, and serving
- sophisticated recommender models easy.
-
- Further, this repo is maintained by TF SIG Recommenders
- ([[email protected]](https://groups.google.com/a/tensorflow.org/g/recommenders))
- for community contributions. SIG Recommenders can contribute more addons as complementary
- to TensorFlow Recommenders, or any helpful libraries related to recommendation systems using
- TensorFlow. The contribution areas can be broad and aren't limited to the topics listed below:
-
- * Training at scale: How to train from super large sparse features? How to
- deal with dynamic embedding and key-value parameters?
- * Serving with efficiency: Given recommendation models are usually pretty
- large, how to serve super large models easily, and how to serve efficiently?
- * Modeling with SoTA techniques: online learning, multi-target learning, dealing
- with inconsistent quality between online and offline, model understandability,
- GNNs, etc.
- * End-to-end pipeline: how to train continuously, e.g. integrate with platforms
- like TFX.
- * Vendor-specific extensions and platform integrations: for example, runtime-
- specific frameworks (e.g. NVIDIA Merlin, …), and integrations with Cloud services
- (e.g. GCP, AWS, Azure…)
-
- ## RFCs
- * [RFC: Dynamic Embedding](rfcs/20200424-sparse-domain-isolation.md)
- * [RFC: Embedding Variable](https://docs.google.com/document/d/1odez6-69YH-eFcp8rKndDHTNGxZgdFFRJufsW94_gl4)
+ that makes TensorFlow more suitable for training models of **Search, Recommendations and Advertising** and
+ makes building, evaluating, and serving sophisticated recommender models easy.
+ See the approved TensorFlow RFC #[313](https://github.com/tensorflow/community/pull/313).
+ Those contributions will be complementary to TensorFlow Core and TensorFlow Recommenders etc.
+
+ ## Main Features
+
+ - Makes key-value data structures (dynamic embedding) trainable in TensorFlow
+ - Delivers better recommendation quality than the static embedding mechanism, with no hash conflicts
+ - Compatible with all native TensorFlow optimizers and initializers
+ - Compatible with the native TensorFlow Checkpoint and SavedModel formats
+ - Fully supports training and inference of recommender models on GPUs
+ - Supports [TF Serving](https://github.com/tensorflow/serving) and [Triton Inference Server](https://github.com/triton-inference-server/server) as inference frameworks
+ - Supports various key-value implementations as dynamic embedding storage, and is easy to extend
+   - [cuckoohash_map](https://github.com/efficient/libcuckoo) (from Efficient Computing at Carnegie Mellon, on CPU)
+   - [nvhash](https://github.com/rapidsai/cudf) (from NVIDIA, on GPU)
+   - [Redis](https://github.com/redis/redis)
+ - Supports half-synchronous training based on Horovod
+   - Synchronous training for dense weights
+   - Asynchronous training for sparse weights
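The first two features above hinge on one idea: the embedding table is a key-value structure whose rows are created on demand, so raw feature IDs never collide the way they do in a fixed, modulo-addressed table. A toy pure-Python sketch of that contrast (the class names and initializer here are hypothetical illustrations, not the TFRA API):

```python
import random


class DynamicEmbedding:
    """Key-value embedding table: a row is created lazily the first time a
    raw feature ID is seen, so arbitrary IDs never collide."""

    def __init__(self, dim, seed=0):
        self.dim = dim
        self.rng = random.Random(seed)
        self.table = {}  # feature ID -> embedding row

    def lookup(self, ids):
        out = []
        for i in ids:
            if i not in self.table:
                # Lazily initialize a new row for an unseen ID.
                self.table[i] = [self.rng.gauss(0, 0.1) for _ in range(self.dim)]
            out.append(self.table[i])
        return out


class StaticEmbedding:
    """Fixed-size table addressed by `id % buckets`: distinct IDs can share
    (and overwrite each other's) rows -- a hash conflict."""

    def __init__(self, dim, buckets, seed=0):
        rng = random.Random(seed)
        self.buckets = buckets
        self.rows = [[rng.gauss(0, 0.1) for _ in range(dim)]
                     for _ in range(buckets)]

    def lookup(self, ids):
        return [self.rows[i % self.buckets] for i in ids]


dyn = DynamicEmbedding(dim=4)
sta = StaticEmbedding(dim=4, buckets=100)

# 107 % 100 == 7, so IDs 7 and 107 collide in the static table
# but get independent rows in the dynamic one.
assert sta.lookup([7])[0] is sta.lookup([107])[0]
assert dyn.lookup([7])[0] is not dyn.lookup([107])[0]
```

In the real library, the dict is replaced by a trainable key-value backend (cuckoohash_map, nvhash, or Redis, per the list above).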

## Subpackages

- * [tfra.dynamic_embedding](docs/api_docs/tfra/dynamic_embedding.md)
- * [tfra.embedding_variable](https://github.com/tensorflow/recommenders-addons/blob/master/docs/tutorials/embedding_variable_tutorial.ipynb)
+ * [tfra.dynamic_embedding](docs/api_docs/tfra/dynamic_embedding.md), [RFC](rfcs/20200424-sparse-domain-isolation.md)
+ * [tfra.embedding_variable](https://github.com/tensorflow/recommenders-addons/blob/master/docs/tutorials/embedding_variable_tutorial.ipynb), [RFC](https://docs.google.com/document/d/1odez6-69YH-eFcp8rKndDHTNGxZgdFFRJufsW94_gl4)
+
+ ## Contributors
+
+ TensorFlow Recommenders-Addons depends on public contributions, bug fixes, and documentation.
+ This project exists thanks to all the people and organizations who contribute. [[Contribute](CONTRIBUTING.md)]
+
+ <a href="https://github.com/tensorflow/recommenders-addons/graphs/contributors">
+   <img src="https://contrib.rocks/image?repo=tensorflow/recommenders-addons" />
+ </a>

- ## Tutorials
- See [`docs/tutorials/`](docs/tutorials/) for end-to-end examples of each subpackage.

- ## Maintainership
+ \
+ <a href="https://github.com/tencent">
+   <kbd><img src="./assets/tencent.png" height="70" /></kbd>
+ </a><a href="https://github.com/alibaba">
+   <kbd><img src="./assets/alibaba.jpg" height="70" /></kbd>
+ </a><a href="https://vip.com/">
+   <kbd><img src="./assets/vips.jpg" height="70" /></kbd>
+ </a><a href="https://www.zhipin.com/">
+   <kbd><img src="./assets/boss.svg" height="70" /></kbd>
+ </a>

- We adopt proxy maintainership as in [TensorFlow Recommenders-Addons](https://github.com/tensorflow/recommenders-addons):
+ \
+ A special thanks to the [NVIDIA Merlin Team](https://github.com/NVIDIA-Merlin) and the NVIDIA China DevTech Team,
+ who have provided GPU acceleration technology support and code contributions.

- * Projects and subpackages are compartmentalized and each is maintained by those
- with expertise and vested interest in that component.*
+ <a href="https://github.com/NVIDIA-Merlin">
+   <kbd><img src="./assets/merilin.png" height="70" /></kbd>
+ </a>

- * Subpackage maintainership will only be granted after substantial contribution
- has been made in order to limit the number of users with write permission.
- Contributions can come in the form of issue closings, bug fixes, documentation,
- new code, or optimizing existing code. Submodule maintainership can be granted
- with a lower barrier for entry as this will not include write permissions to
- the repo.*
+ ## Tutorials & Demos
+ See the [tutorials](docs/tutorials/) and [demo](demo/) for end-to-end examples of each subpackage.

## Installation
#### Stable Builds
@@ -101,14 +100,16 @@ is compiled differently. A typical example of this would be `conda`-installed Te


#### Compatibility Matrix
- *GPU is supported from version `0.2.0`*
+ *GPU is supported by version `0.2.0` and later.*

- | TFRA | TensorFlow | Compiler | CUDA | CUDNN | Compute Capability |
- |:-----|:-----------|:---------|:-----|:------|:-------------------|
- | 0.3.0 | 2.5.1 | GCC 7.3.1 | 11.2 | 8.1 | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 |
- | 0.2.0 | 2.4.1 | GCC 7.3.1 | 11.0 | 8.0 | 6.0, 6.1, 7.0, 7.5, 8.0 |
- | 0.2.0 | 1.15.2 | GCC 7.3.1 | 10.0 | 7.6 | 6.0, 6.1, 7.0, 7.5 |
- | 0.1.0 | 2.4.1 | GCC 7.3.1 | - | - | - |
+ | TFRA | TensorFlow | Compiler | CUDA | CUDNN | Compute Capability | CPU |
+ |:-----|:-----------|:---------|:-----|:------|:-------------------|:----|
+ | 0.4.0 | 2.5.1 | GCC 7.3.1 | 11.2 | 8.1 | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 | x86 |
+ | 0.4.0 | 2.5.0 | Xcode 13.1 | - | - | - | Apple M1 |
+ | 0.3.1 | 2.5.1 | GCC 7.3.1 | 11.2 | 8.1 | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 | x86 |
+ | 0.2.0 | 2.4.1 | GCC 7.3.1 | 11.0 | 8.0 | 6.0, 6.1, 7.0, 7.5, 8.0 | x86 |
+ | 0.2.0 | 1.15.2 | GCC 7.3.1 | 10.0 | 7.6 | 6.0, 6.1, 7.0, 7.5 | x86 |
+ | 0.1.0 | 2.4.1 | GCC 7.3.1 | - | - | - | x86 |

Check [nvidia-support-matrix](https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html) for more details.

@@ -120,7 +121,7 @@ Check [nvidia-support-matrix](https://docs.nvidia.com/deeplearning/cudnn/support
PY_VERSION="3.7" \
TF_VERSION="1.15.2" \
TF_NEED_CUDA=1 \
- sh .github/workflows/make_wheel_Linux.sh
+ sh .github/workflows/make_wheel_Linux_x86.sh

# .whl file will be created in ./wheelhouse/
```
@@ -136,7 +137,7 @@ Please install a TensorFlow on your compiling machine, The compiler needs to kno
its headers according to the installed TensorFlow.

```
- export TF_VERSION=["2.5.1", "2.4.1", "1.15.2"]
+ export TF_VERSION="2.5.1"  # "2.7.0" and "2.5.1" are well tested.
pip install tensorflow[-gpu]==$TF_VERSION

git clone https://github.com/tensorflow/recommenders-addons.git
@@ -203,12 +204,13 @@ de = tfra.dynamic_embedding.get_variable("VariableOnGpu",
sess_config.gpu_options.allow_growth = True
```

- ### Compatibility with TensorFlow Serving
+ ## Inference with TensorFlow Serving

#### Compatibility Matrix
- | TFRA | TensorFlow | Serving | Compiler | CUDA | CUDNN | Compute Capability |
- |:-----|:-----------|:--------|:---------|:-----|:------|:-------------------|
- | 0.3.0 | 2.5.1 | 2.5.2 | GCC 7.3.1 | 11.2 | 8.1 | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 |
+ | TFRA | TensorFlow | Serving | Compiler | CUDA | CUDNN | Compute Capability |
+ |:-----|:-----------|:--------|:---------|:-----|:------|:-------------------|
+ | 0.4.0 | 2.5.1 | 2.5.2 | GCC 7.3.1 | 11.2 | 8.1 | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 |
+ | 0.3.1 | 2.5.1 | 2.5.2 | GCC 7.3.1 | 11.2 | 8.1 | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 |
| 0.2.0 | 2.4.1 | 2.4.0 | GCC 7.3.1 | 11.0 | 8.0 | 6.0, 6.1, 7.0, 7.5, 8.0 |
| 0.2.0 | 1.15.2 | 1.15.0 | GCC 7.3.1 | 10.0 | 7.6 | 6.0, 6.1, 7.0, 7.5 |
| 0.1.0 | 2.4.1 | 2.4.0 | GCC 7.3.1 | - | - | - |
@@ -231,14 +233,9 @@ SUPPORTED_TENSORFLOW_OPS = if_v2([]) + if_not_v2([
  "//tensorflow_recommenders_addons/dynamic_embedding/core:_math_ops.so",
]
```
+ **NOTICE**
+ - Distributed inference is only supported when using Redis as Key-Value storage.
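The Redis requirement follows from how distributed inference works here: with an external key-value backend, every serving replica reads one shared embedding table instead of holding a private copy. A minimal sketch of that arrangement, with a plain dict standing in for Redis (all names below are hypothetical illustrations, not the TFRA or Redis API):

```python
class SharedKVStore:
    """Stands in for Redis: a single table shared by all serving replicas."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


class ServingReplica:
    """A serving process that keeps no local copy of the embedding table."""

    def __init__(self, store):
        self.store = store

    def lookup(self, feature_id):
        return self.store.get(feature_id)


store = SharedKVStore()
store.set(42, [0.1, 0.2, 0.3])  # trainer writes an embedding row

# Any number of replicas can serve the same table without duplicating it:
replicas = [ServingReplica(store) for _ in range(3)]
assert all(r.lookup(42) == [0.1, 0.2, 0.3] for r in replicas)
```

With an in-process backend such as cuckoohash_map, each replica would need its own full copy of the table, which is why only the Redis backend supports this mode.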

- ## Contributing
-
- TensorFlow Recommenders-Addons is a community-led open source project. As such,
- the project depends on public contributions, bug fixes, and documentation. This
- project adheres to TensorFlow's Code of Conduct.
-
- Please follow the [contributing guide](CONTRIBUTING.md) for more details.

## Community
@@ -247,6 +244,7 @@ Please follow up the [contributing guide](CONTRIBUTING.md) for more details.

## Acknowledgment
We are very grateful to the maintainers of [tensorflow/addons](https://github.com/tensorflow/addons); we borrowed a lot of code from their project to build our workflow and documentation system.
+ We also want to extend a thank-you to the Google team members who have helped with CI setup and reviews!

## Licence
Apache License 2.0