
Optimizing Problem #2

@xleonplayz

Description


At the moment the Adam optimizer is used, but the loss is always 0.0, which shouldn't be the case. The training phases should be permuted on every step so that each phase actually influences the training.
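A minimal sketch of what the fix could look like, assuming a PyTorch training loop. The model, the `phase_a`/`phase_b`/`phase_c` functions, and the `phases` list are hypothetical placeholders, not this project's actual code; the point is shuffling the phase order each step and guarding against a loss that is detached from the graph:

```python
import random

import torch
import torch.nn as nn

# Hypothetical toy model with three training "phases",
# each contributing its own loss term.
model = nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def phase_a(batch, target):
    return loss_fn(model(batch), target)

def phase_b(batch, target):
    return loss_fn(model(batch).abs(), target.abs())

def phase_c(batch, target):
    return loss_fn(model(batch) * 2, target * 2)

phases = [phase_a, phase_b, phase_c]

for step in range(100):
    batch = torch.randn(8, 4)
    target = torch.randn(8, 1)

    # Permute the phase order on every step so each phase
    # influences the gradient, instead of always running in
    # a fixed order.
    random.shuffle(phases)

    optimizer.zero_grad()
    total_loss = sum(phase(batch, target) for phase in phases)
    total_loss.backward()
    optimizer.step()

    # A loss that is exactly 0.0 usually means the loss tensor was
    # detached from the graph or computed on constant inputs.
    assert total_loss.item() != 0.0, "loss collapsed to 0.0 - check the graph"
```

If the loss stays at exactly 0.0 even with the phases permuted, the likely culprit is that the loss is computed from detached or constant tensors, so Adam receives zero gradients regardless of phase order.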

Metadata

Labels

bug (Something isn't working), documentation (Improvements or additions to documentation)
