
Implement the unified attention interpretation API for similar models #4

@oserikov

Description


duration: scalable; can be either 170 or 340 hours
difficulty: challenging
mentor: @oserikov, TBD
requirements:

  1. PyTorch
  2. scikit-learn (sklearn)
  3. Python engineering skills (OOP, etc.)
  4. experience with Transformer language models

useful links:

  • NeuroX codebase
  • BERT Rediscovers the Classical NLP Pipeline
  • Captum

Idea Description:

While HuggingFace quickly became the standard way to publish language models, several architectural trade-offs were made to support the rapid growth of the model zoo. As a result, theoretically similar models were implemented by different teams, so several alternative implementations of self-attentive Transformers arose. Since refactoring the whole model zoo is far from an accessible task, the interpretability community is forced to provide unification wrappers that handle such dissimilarities between similar models. The task is to find a reasonable trade-off between refactoring the crucial models and providing unified wrappers, and thus bring a unified interpretability API to the crucial HuggingFace models.

We can view this task from two perspectives. First, one could unify the interpretability API of sibling models such as BERT and RoBERTa. Second, one could think about bringing a unified interface for interpreting and comparing encoder models with, e.g., encoder-decoder ones, allowing the study of similarities and differences in their behavior.
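As a rough illustration of the first perspective, the sketch below wraps sibling HuggingFace models behind one attention-extraction interface. It assumes the `transformers` library; the `UnifiedAttention` class and its handling of encoder-decoder models are illustrative assumptions, not an existing API.

```python
# A minimal sketch of a unified attention-extraction wrapper (assumption:
# the HuggingFace `transformers` library is installed). `UnifiedAttention`
# is a hypothetical class name, not part of any existing API.
import torch
from transformers import AutoModel, AutoTokenizer


class UnifiedAttention:
    """Expose per-layer attention maps through one interface, regardless of
    whether the underlying model is encoder-only (BERT, RoBERTa) or
    encoder-decoder (e.g. T5, BART)."""

    def __init__(self, model_name: str):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModel.from_pretrained(model_name, output_attentions=True)
        self.model.eval()

    @torch.no_grad()
    def attentions(self, text: str):
        inputs = self.tokenizer(text, return_tensors="pt")
        if self.model.config.is_encoder_decoder:
            # Encoder-decoder models would also need decoder inputs; here we
            # only run the encoder and read its self-attentions.
            outputs = self.model.encoder(**inputs)
        else:
            outputs = self.model(**inputs)
        # Tuple of tensors, one per layer: (batch, heads, seq_len, seq_len)
        return outputs.attentions


# Usage: the same call works for sibling encoder models.
for name in ("bert-base-uncased", "roberta-base"):
    maps = UnifiedAttention(name).attentions("Attention is all you need.")
    print(name, len(maps), maps[0].shape)
```

The same pattern could be extended to normalize access to the attention submodules themselves (e.g. per-head weights before the softmax), which is where the implementation-level dissimilarities between otherwise similar models actually show up.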

Coding Challenge

WIP
