The error of bandgap does not decrease as the iteration proceeds #91

Description

@yycx1111
  1. The problem of the convergence rate dropping to 0 was mitigated by making orbital_factor smaller.
  2. However, as the iterations proceed, the band-gap error does not continue to decrease.
  3. Alternatively, the band-gap error decreases and then increases as the iterations progress.
    For example, with the parameter setting force_factor: 1, stress_factor: 0.1, orbital: 0.01, the log.data is as follows.
    iter.init: [screenshot of log.data]
    iter.00: [screenshot of log.data]
    iter.01: [screenshot of log.data]
    iter.02: [screenshot of log.data]
    iter.03: [screenshot of log.data]
    iter.04: [screenshot of log.data]
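For context on why shrinking the orbital weight changes behavior, here is a minimal sketch of how per-term factors such as force_factor, stress_factor, and the orbital weight typically combine training losses. This is an assumption about the general weighted-loss pattern, not the project's actual code; the term names and error values are illustrative only.

```python
# Hypothetical weighted multi-term loss: each quantity's error is scaled
# by its factor before summing. Names and values are illustrative.

def weighted_loss(errors, factors):
    """Combine per-quantity errors with their weight factors."""
    return sum(factors[k] * errors[k] for k in factors)

# With the setting from this issue (orbital weight lowered to 0.01),
# the orbital term contributes far less to the total loss, which can
# stabilize training but may also under-fit the orbital quantities.
errors = {"force": 0.5, "stress": 0.2, "orbital": 4.0}
factors = {"force": 1.0, "stress": 0.1, "orbital": 0.01}
total = weighted_loss(errors, factors)
# total = 0.5*1 + 0.2*0.1 + 4.0*0.01 = 0.56
```

Under this picture, a very small orbital weight means even a large orbital (band-gap-related) error barely moves the optimizer, which is one plausible reason the band-gap error stalls while other terms keep improving.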

Questions:

  1. Are there any suggestions for parameter tuning to address this? Previously, by reducing the number of training epochs we could get a few more rounds of iteration, but eventually the SCF calculation still fails to converge in some round, and the error decreases only slightly.
  2. We are quite confused about how SCF convergence interacts with the training iterations and why the band-gap error stops decreasing, and would appreciate any answers and suggestions.
