
Linear decoder #56

@agosztolai

Description


Hello,

I have been playing around with your repository for a few days and have noticed that in the NNDMD example, you are using a non-linear encoder and a linear decoder, e.g., [here](https://pykoopman.readthedocs.io/en/master/tutorial_koopman_nndmd_examples.html).

Part of the _nndmd.py code seems to depend explicitly on a linear decoder — it actually crashes if I choose a non-linear decoder, because the eigenvectors are mapped back using an 'effective linear transformation'.

Is there a specific reason why this is a good choice? It seems counterintuitive that a linear transformation can invert a non-linear one. For comparison, the DeepKoopman code by Lusch et al. appears to use a non-linear decoder. Can you point me to a theoretical justification (e.g., a paper) for using a linear decoder?
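For context, a sketch of one common argument for this choice (this is my own illustration, not pyKoopman's actual implementation): if the non-linear encoder's dictionary of observables contains the state itself, e.g. phi(x) = [x, g(x)], then a purely linear decoder C that selects the first n components inverts the lifting exactly, so no non-linear decoder is needed. All names below are hypothetical.

```python
import numpy as np

n = 3  # state dimension (assumed for illustration)

def encoder(x):
    # non-linear observables stacked on top of the raw state:
    # phi(x) = [x, sin(x), x**2], a 3n-dimensional lifting
    return np.concatenate([x, np.sin(x), x**2])

# linear decoder: C = [I_n, 0], which picks the state back out
# of the lifted observable vector
C = np.hstack([np.eye(n), np.zeros((n, 2 * n))])

x = np.array([0.5, -1.2, 2.0])
z = encoder(x)      # lift to 9-dimensional observable space
x_rec = C @ z       # linear reconstruction of the state

assert np.allclose(x_rec, x)  # exact inversion despite the non-linear encoder
```

So the linear decoder does not invert the non-linearity in general — it only needs to project back onto the state coordinates when those coordinates are retained in the dictionary. Whether pyKoopman relies on exactly this structure is something the maintainers would have to confirm.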
