Poor generalization on ITOP test set #4

@danasko

I'm trying to implement your approach in Keras. While the results on the ITOP dataset are reasonably good on the validation set (obtained by splitting the training set), I can't get the average error on the test set below ~0.08 m. Do you think this could be caused by high variability between the train and test sets? Or does it seem more like the regularization is insufficient? I'm using the same dropout and regularization as stated in the paper, including the same alpha parameter, so I'm running out of ideas.
Any help would be appreciated, thanks.
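For reference, the ~0.08 m figure above is the average per-joint 3D error commonly reported on ITOP. A minimal sketch of how that metric is typically computed is below; the function name and the list-of-tuples representation are my own illustration, not taken from the paper or its codebase.

```python
import math

def mean_joint_error(pred, true):
    """Average Euclidean distance (in meters) between predicted and
    ground-truth 3D joint positions for one skeleton.
    `pred` and `true` are equal-length lists of (x, y, z) tuples."""
    dists = [math.dist(p, t) for p, t in zip(pred, true)]
    return sum(dists) / len(dists)

# Example: one joint is exact, the other is off by 0.1 m,
# so the mean per-joint error is 0.05 m.
pred = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
true = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.1)]
print(mean_joint_error(pred, true))  # → 0.05
```

Averaging this value over all test frames gives the dataset-level error being compared against the paper.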
