Hello, I found a performance issue in the definition of _inference in examples/memn2n_dialogue/memn2n_dialogue.py: the tf.nn.embedding_lookup op is created inside the hop loop, so it is calculated repeatedly during program execution, which reduces efficiency. Since its result does not change between iterations, I think it should be created once before the loop.
The same issue exists with the tf.reduce_sum calls on lines 187 and 200. A rough sketch of the suggested change is below.
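To illustrate the idea, here is a minimal, self-contained toy example of hoisting the loop-invariant ops out of the hop loop. The shapes and variable names (A, stories, hops, etc.) are made up for the illustration and are not taken from memn2n_dialogue.py; this is only a sketch of the pattern, not the actual fix.

```python
import tensorflow as tf

# Hypothetical shapes/names for illustration only.
vocab_size, embed_dim, hops = 100, 20, 3
A = tf.get_variable("A", [vocab_size, embed_dim])
stories = tf.placeholder(tf.int32, [None, 10, 6], name="stories")

# Before (schematically): the lookup and reduce_sum are rebuilt on every hop,
# adding `hops` identical ops to the graph:
#   for _ in range(hops):
#       m = tf.reduce_sum(tf.nn.embedding_lookup(A, stories), 2)
#       ...

# After: the loop-invariant memory encoding is built once and reused.
m = tf.reduce_sum(tf.nn.embedding_lookup(A, stories), 2)  # [batch, memory, embed]

u = tf.zeros([tf.shape(stories)[0], embed_dim])
for _ in range(hops):
    # Each hop reuses the same `m` tensor instead of recreating it.
    attention = tf.nn.softmax(tf.reduce_sum(m * tf.expand_dims(u, 1), 2))
    o = tf.reduce_sum(tf.expand_dims(attention, -1) * m, 1)
    u = u + o
```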
Looking forward to your reply. By the way, I would be glad to open a PR to fix this if you are busy.