
Commit dd3a167

Merge remote-tracking branch 'upstream/master'

2 parents eb1fe22 + c794cb5

File tree: 14 files changed, +1482 -3 lines


RELEASE.md

Lines changed: 36 additions & 0 deletions
@@ -1,3 +1,39 @@
+# Release 1.3.0
+
+## Major Features and Improvements
+
+* Added locality-sensitive hashing (LSH) support to the graph builder tool.
+  This allows the graph builder to scale up to larger input datasets. As part
+  of this change, the new `nsl.configs.GraphBuilderConfig` class was
+  introduced, as well as a new `nsl.tools.build_graph_from_config` function.
+  The new parameters for controlling the LSH algorithm are named `lsh_rounds`
+  and `lsh_splits`.
+
+## Bug Fixes and Other Changes
+
+* Fixed a bug in `nsl.tools.read_tsv_graph` that was incrementing the edge
+  count too often.
+* Changed `nsl.tools.add_edge` to return a boolean result indicating whether a
+  new edge was added; previously, this function did not return any value.
+* Removed Python 2 unit tests.
+* Fixed a bug in `nsl.estimator.add_adversarial_regularization` and
+  `nsl.estimator.add_graph_regularization` so that `UPDATE_OPS` are triggered
+  correctly.
+* Updated graph-NSL tutorials not to parse neighbor features during
+  evaluation.
+* Added scaled graph and adversarial loss values as scalars to the summary in
+  `nsl.estimator.add_graph_regularization` and
+  `nsl.estimator.add_adversarial_regularization`, respectively.
+* Updated graph and adversarial regularization loss metrics in
+  `nsl.keras.GraphRegularization` and `nsl.keras.AdversarialRegularization`,
+  respectively, to include scaled values for consistency with their respective
+  loss term contributions.
+
+## Thanks to our Contributors
+
+This release contains contributions from many people at Google.
+
 # Release 1.2.0
 
 ## Major Features and Improvements
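The LSH entry above names two new APIs, `nsl.configs.GraphBuilderConfig` and `nsl.tools.build_graph_from_config`, plus the `lsh_rounds` and `lsh_splits` parameters. Below is a minimal usage sketch; apart from those documented names, the field and argument names (`similarity_threshold`, `embedding_files`, `output_graph_path`, `graph_builder_config`) are assumptions modeled on the pre-existing `nsl.tools.build_graph` API, not confirmed signatures.

```python
import neural_structured_learning as nsl

# Hypothetical sketch of the 1.3.0 LSH graph builder; names other than
# `lsh_rounds`/`lsh_splits` are assumptions based on nsl.tools.build_graph.
config = nsl.configs.GraphBuilderConfig(
    similarity_threshold=0.8,  # keep edges whose similarity is >= 0.8
    lsh_rounds=2,              # per the release note: controls the LSH algorithm
    lsh_splits=4)              # per the release note: controls the LSH algorithm

# Build a graph from embedding files and write the edges to a TSV file that
# nsl.tools.read_tsv_graph can load back.
nsl.tools.build_graph_from_config(
    embedding_files=['/tmp/embeddings.tfr'],
    output_graph_path='/tmp/graph.tsv',
    graph_builder_config=config)
```

Relatedly, because `nsl.tools.add_edge` now returns a boolean, callers can count genuinely new edges instead of all insertion attempts.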

g3doc/tutorials/graph_keras_lstm_imdb.ipynb

Lines changed: 1 addition & 1 deletion
@@ -1381,7 +1381,7 @@
 "acc = graph_reg_history_dict['accuracy']\n",
 "val_acc = graph_reg_history_dict['val_accuracy']\n",
 "loss = graph_reg_history_dict['loss']\n",
-"graph_loss = graph_reg_history_dict['graph_loss']\n",
+"graph_loss = graph_reg_history_dict['scaled_graph_loss']\n",
 "val_loss = graph_reg_history_dict['val_loss']\n",
 "\n",
 "epochs = range(1, len(acc) + 1)\n",

neural_structured_learning/keras/layers/layers_test.py

Lines changed: 4 additions & 1 deletion
@@ -37,14 +37,17 @@ def _make_functional_regularized_model(distance_config):
 def _make_unregularized_model(inputs, num_classes):
   """Makes standard 1 layer MLP with logistic regression."""
   x = tf.keras.layers.Dense(16, activation='relu')(inputs)
-  return tf.keras.Model(inputs, outputs=tf.keras.layers.Dense(num_classes)(x))
+  model = tf.keras.Model(inputs, tf.keras.layers.Dense(num_classes)(x))
+  return model
 
 
 # Each example has 4 features and 2 neighbors, each with an edge weight.
 inputs = (tf.keras.Input(shape=(4,), dtype=tf.float32, name='features'),
           tf.keras.Input(shape=(2, 4), dtype=tf.float32, name='neighbors'),
           tf.keras.Input(
               shape=(2, 1), dtype=tf.float32, name='neighbor_weights'))
 features, neighbors, neighbor_weights = inputs
+neighbors = tf.reshape(neighbors, (-1,) + tuple(features.shape[1:]))
+neighbor_weights = tf.reshape(neighbor_weights, [-1, 1])
 unregularized_model = _make_unregularized_model(features, 3)
 logits = unregularized_model(features)
 model = tf.keras.Model(inputs=inputs, outputs=logits)
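The two added `tf.reshape` calls fold the per-example neighbor axis into the batch axis, presumably so each neighbor has the same rank as a regular example when consumed downstream in the test. A standalone sketch of the shape arithmetic, with an assumed batch size of 8:

```python
import tensorflow as tf

features = tf.zeros([8, 4])             # batch of 8 examples, 4 features each
neighbors = tf.zeros([8, 2, 4])         # 2 neighbors per example
neighbor_weights = tf.zeros([8, 2, 1])  # one edge weight per neighbor

# Fold the neighbor axis into the batch axis: (8, 2, 4) -> (16, 4).
flat_neighbors = tf.reshape(neighbors, (-1,) + tuple(features.shape[1:]))
# Flatten the weights the same way: (8, 2, 1) -> (16, 1).
flat_weights = tf.reshape(neighbor_weights, [-1, 1])

print(flat_neighbors.shape, flat_weights.shape)  # (16, 4) (16, 1)
```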

neural_structured_learning/version.py

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
 
 # We follow Semantic Versioning (https://semver.org/).
 _MAJOR_VERSION = '1'
-_MINOR_VERSION = '2'
+_MINOR_VERSION = '3'
 _PATCH_VERSION = '0'
 
 _VERSION_SUFFIX = ''

workshops/kdd_2020/README.md

Lines changed: 19 additions & 0 deletions
@@ -36,6 +36,8 @@ We will begin the tutorial with an overview of the Neural Structured Learning
 framework and motivate the advantages of training neural networks with
 structured signals.
 
+[[Slides](slides/Introduction.pdf)]
+
 ### Data preprocessing in NSL
 
 This part of the tutorial will include a presentation discussing:
@@ -44,6 +46,8 @@ This part of the tutorial will include a presentation discussing:
 - Augmenting training data for graph-based regularization in NSL
 - Related tools in the NSL framework
 
+[[Slides](slides/Data_Preprocessing.pdf)]
+
 ### Graph regularization using natural graphs (Lab 1)
 
 Graph regularization [2] forces neural networks to learn similar
@@ -53,6 +57,9 @@ inherent relationship between each other. We will demonstrate via a practical
 tutorial, the use of natural graphs for graph regularization to classify the
 veracity of public message posts.
 
+[[Slides](slides/Natural_Graphs.pdf)]
+[[Colab tutorial](https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/workshops/kdd_2020/graph_regularization_pheme_natural_graph.ipynb)]
+
 ### Graph regularization using synthesized graphs (Lab 2)
 
 Input data may not always be represented as a graph. However, one can infer
@@ -62,6 +69,9 @@ for text classification using a practical tutorial. While graphs can be built in
 many ways, we will make use of text embeddings in this tutorial to build a
 graph.
 
+[[Slides](slides/Synthesized_Graphs.pdf)]
+[[Colab tutorial](https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_lstm_imdb.ipynb)]
+
 ### Adversarial regularization (Lab 3)
 
 Adversarial learning has been shown to be effective in improving the accuracy of
@@ -70,11 +80,16 @@ adversarial learning techniques [3,4] like *Fast Gradient Sign Method* (FGSM)
 and *Projected Gradient Descent* (PGD) for image classification using a
 practical tutorial.
 
+[[Slides](slides/Adversarial_Learning.pdf)]
+[[Colab tutorial](https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/workshops/kdd_2020/adversarial_regularization_mnist.ipynb)]
+
 ### NSL in TensorFlow Extended (TFX)
 
 - Presentation on how Neural Structured Learning can be integrated with
   [TFX](https://www.tensorflow.org/tfx) pipelines.
 
+[[Slides](slides/NSL_in_TFX.pdf)]
+
 ### Research and Future Directions
 
 - Presentation discussing:
@@ -84,12 +99,16 @@ practical tutorial.
 - Prototype showing how NSL can be used with the
   [Graph Nets](https://github.com/deepmind/graph_nets) [9] library.
 
+[[Slides](slides/Research_and_Future_Directions.pdf)]
+
 ### Conclusion
 
 We will conclude our tutorial with a summary of the entire session, provide
 links to various NSL resources, and share a link to a brief survey to get
 feedback on the NSL framework and the hands-on tutorial.
 
+[[Slides](slides/Summary.pdf)]
+
 ## References
 
 1. https://www.tensorflow.org/neural_structured_learning
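Lab 3 above centers on `nsl.keras.AdversarialRegularization`, the wrapper used in the linked MNIST Colab. A minimal sketch of the pattern follows, assuming a simple MNIST-style classifier; the model and hyperparameter values are illustrative stand-ins for the Colab's, and a single perturbation step along the gradient sign (`adv_grad_norm='infinity'`) is what makes the perturbation FGSM-like.

```python
import neural_structured_learning as nsl
import tensorflow as tf

# Illustrative base model for 28x28 grayscale images (a stand-in for the
# model built in the Colab tutorial linked above).
base_model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28), name='feature'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10),
])

# FGSM-style perturbation: one step of size adv_step_size along the gradient
# sign; the adversarial loss term is weighted by `multiplier`.
adv_config = nsl.configs.make_adv_reg_config(
    multiplier=0.2, adv_step_size=0.05, adv_grad_norm='infinity')
adv_model = nsl.keras.AdversarialRegularization(
    base_model, label_keys=['label'], adv_config=adv_config)

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

adv_model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])
# The wrapper takes dict inputs so it can locate features and labels by name.
adv_model.fit({'feature': x_train, 'label': y_train}, batch_size=32, epochs=5)
```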
