Hi,
I have a non-standard use case, but I think it could be useful for many users.
I want to use CEBRA to learn embeddings of my interpretable behavioral features, using neural activity as labels/auxiliary variables, across multiple sessions and subjects, even though the number and "identities" of sampled neurons vary across sessions. The standard CEBRA approach (as I understand it) would be to flip the direction and learn embeddings of neural activity using behavioral variables as labels. However, I think the reverse direction, which isn't currently supported, is equally or perhaps even more interesting.
Why? Because studying neural tuning is important: tuning determines other key properties (e.g., representational geometry and discriminability/decodability), whereas the reverse is not true (see Kriegeskorte and Wei, 2024). The behavioral -> neural direction makes studying neural tuning more straightforward, I think.
So I'm trying to figure out how to work around this, or maybe even extend CEBRA to accommodate it. One option would be a behavioral encoder model shared across sessions, plus a session-specific "projection head" that maps the intermediate embedding into the final (session-specific) embedding; the contrastive loss would then be computed using the neural labels. I suspect this would require sampling both positive and negative pairs within-session, though I'm not entirely sure. This setup seems similar to SimCLR and other now-standard contrastive methods. To predict neural activity in a new session, we could then fit only a new projection head mapping from the shared behavioral embedding to the new neural space. Feature attribution methods could then be used to study neural tuning. And the shared behavioral embedding might itself be interesting as a "neurally relevant embedding of behavior", complementing the embeddings currently obtainable from CEBRA (though this part may be less promising than the neural encoding/tuning part).
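To make the architecture concrete, here is a minimal NumPy sketch of the idea: a shared behavioral encoder, session-specific projection heads, and an InfoNCE-style loss computed within each session. Everything here is a hypothetical placeholder, not CEBRA API (a real implementation would use PyTorch modules and CEBRA's samplers), and the positive-pair rule (next time step) stands in for choosing positives via neural-label similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(behavior, W):
    # Shared nonlinear encoder: behavior (T, d_beh) -> intermediate embedding (T, d_emb).
    return np.tanh(behavior @ W)

def session_head(z, P):
    # Session-specific linear projection head: (T, d_emb) -> session embedding (T, d_out).
    return z @ P

def info_nce_within_session(emb, pos_idx, temperature=1.0):
    # InfoNCE within one session: row i's positive is row pos_idx[i];
    # all other rows of the same session serve as negatives.
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = emb @ emb.T / temperature            # (T, T) cosine similarities
    np.fill_diagonal(sims, -np.inf)             # exclude self-similarity
    log_probs = sims - np.logaddexp.reduce(sims, axis=1, keepdims=True)
    return -log_probs[np.arange(len(emb)), pos_idx].mean()

# Toy setup: two sessions share one encoder but get separate heads.
T, d_beh, d_emb, d_out = 50, 5, 8, 8
W = rng.normal(size=(d_beh, d_emb))             # shared encoder weights
heads = {"s1": rng.normal(size=(d_emb, d_out)), # one projection head per session
         "s2": rng.normal(size=(d_emb, d_out))}

losses = {}
for name, P in heads.items():
    behavior = rng.normal(size=(T, d_beh))
    emb = session_head(shared_encoder(behavior, W), P)
    # Placeholder positive pairs: the next time step. In the proposal, positives
    # would instead be selected by similarity of the neural labels.
    pos = (np.arange(T) + 1) % T
    losses[name] = info_nce_within_session(emb, pos)
```

The point of the sketch is the parameter sharing: gradients from every session's loss update the shared encoder `W`, while each head `P` only sees its own session, so adapting to a new session means fitting a new head only.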
A simpler alternative could use similar multi-task heads with a standard supervised loss; still, I think self-supervised contrastive learning would be worth trying, and comparing the two options would be interesting.
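For comparison, the supervised multi-task variant is even simpler to sketch: the same shared encoder, but each session head directly regresses that session's neural activity under an MSE loss. Again, all names, dimensions, and the toy data below are illustrative assumptions, not any existing API.

```python
import numpy as np

rng = np.random.default_rng(1)

def mse_loss(pred, target):
    # Standard mean-squared-error regression loss.
    return np.mean((pred - target) ** 2)

# Shared behavioral encoder; session heads differ in output size because
# each session recorded a different number of neurons.
T, d_beh, d_emb = 50, 5, 8
W = rng.normal(size=(d_beh, d_emb))         # shared encoder weights
session_neurons = {"s1": 120, "s2": 87}     # toy per-session neuron counts

losses = {}
for name, n_neurons in session_neurons.items():
    P = rng.normal(size=(d_emb, n_neurons)) * 0.1  # session-specific readout
    behavior = rng.normal(size=(T, d_beh))
    z = np.tanh(behavior @ W)                      # shared embedding
    neural = rng.normal(size=(T, n_neurons))       # observed activity (toy data)
    losses[name] = mse_loss(z @ P, neural)         # per-session regression loss
```

This version sidesteps pair sampling entirely, which makes it a useful baseline against the contrastive variant.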
I'd greatly appreciate your feedback and guidance on this idea, and on whether and how to implement it. If it sounds interesting or potentially useful, I'd be happy to contribute. I'm also happy to provide more details about my data/use case if needed.
Thanks!
(PS I wrote a related post here)