
I see.
This seems strange to me, but this readout is based entirely on "Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks" by Lee et al., whose original implementation also uses a single linear layer followed by ReLU.
Thanks for the prompt answer.
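
For anyone landing here later, here is a minimal sketch of the pattern being discussed: the row-wise feed-forward step inside the Set Transformer's attention block, which (unlike the standard two-layer Transformer FFN) applies a single linear layer followed by ReLU, added residually. The class name `RowwiseFF` is a hypothetical label for this sketch; the `fc_o` + ReLU pattern follows the reference implementation by Lee et al. (https://github.com/juho-lee/set_transformer).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RowwiseFF(nn.Module):
    # Hypothetical wrapper name for illustration; the body mirrors the
    # feed-forward step of the MAB block in Lee et al.'s reference code:
    # a single linear layer followed by ReLU, with a residual connection,
    # rather than the usual two-layer Transformer FFN.
    def __init__(self, dim: int):
        super().__init__()
        self.fc_o = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Single linear layer + ReLU, added back to the input (residual).
        return x + F.relu(self.fc_o(x))


ff = RowwiseFF(64)
out = ff(torch.randn(32, 10, 64))  # (batch, set size, feature dim)
```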
