ExpVarCell #33

Open · cortner opened this issue Oct 24, 2016 · 1 comment

cortner commented Oct 24, 2016

This issue is to discuss anything related to the following variable cell implementation:

We parametrise the cell deformation as F = exp(U); then the following properties hold:

  • F is a rotation iff U = -U' (so rotational motion can be eliminated simply by imposing U = U', i.e. restricting U to be symmetric)
  • F is diagonal iff U is diagonal (useful for an isotropic constraint)
  • det(F) = exp(trace(U)), so a volume constraint becomes linear, trace(U) = log(volume), and is trivial to impose in the optimisation
  • maybe the best one: F is non-degenerate (det(F) ≠ 0) for ALL U, so no increment can degenerate the cell (see the numerical check below)
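
To make these properties concrete, here is a minimal Julia sketch (my own illustration, not part of the implementation; it uses only the standard-library matrix exponential) that checks them numerically:

```julia
using LinearAlgebra

U = randn(3, 3)

# antisymmetric U  =>  exp(U) is a rotation (orthogonal, det = +1)
A = (U - U') / 2
R = exp(A)
@show norm(R' * R - I)          # ≈ 0
@show det(R)                    # ≈ 1

# diagonal U  =>  exp(U) is diagonal
Fd = exp(Matrix(Diagonal(diag(U))))
@show norm(Fd - Diagonal(Fd))   # ≈ 0

# det(exp(U)) = exp(trace(U)): a volume constraint is linear in U,
# and exp(tr(U)) > 0 for every real U, so F can never become singular
F = exp(U)
@show det(F) - exp(tr(U))       # ≈ 0
```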

Unfortunately, in initial tests this turned out not to be robust.

cortner commented Oct 24, 2016

I am narrowing down why I couldn't get this idea (which I still maintain is the right approach) to work. Basically, when the cell is too far from equilibrium, moderate-looking steps in U are exponentiated and then take the cell shape/size outside any meaningful regime.
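
To illustrate the amplification (my own toy example, not from the actual tests): even a modest step in U rescales the volume exponentially, since det(F) = exp(trace(U)).

```julia
using LinearAlgebra

# a "moderate-looking" trial step in log-space: 1.5 along each axis
U = Matrix(1.5I, 3, 3)
F = exp(U)
det(F)      # = exp(tr(U)) = exp(4.5) ≈ 90, i.e. the cell volume grows ~90-fold
```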

So I now tested it with a Si cell and a StillingerWeber potential, where the initial cell size is not too far from equilibrium. In that case I get the following (this is Newton-type behaviour! It basically suggests that the identity is already the perfect preconditioner; I don't understand this yet):

[1] Test optimisation with ExpVariableCell

vecnorm(virial(at),Inf) = 0.1063981186901441
Iter     Function value   Gradient norm
    0    -6.935997e+01     1.063981e-01
    1    -6.936000e+01     3.144277e-03
    2    -6.936000e+01     4.909681e-08
  • Objective Function Calls: 8
  • Gradient Calls: 6

[2] Test for comparison with the standard VariableCell

vecnorm(virial(at),Inf) = 0.1063981186901441
Iter     Function value   Gradient norm
    0    -6.935997e+01     9.797248e-03
    1    -6.935999e+01     4.261614e-03
    2    -6.936000e+01     3.449840e-03
    3    -6.936000e+01     5.117687e-05
    4    -6.936000e+01     1.636466e-05
    5    -6.936000e+01     3.991784e-06
    6    -6.936000e+01     5.387925e-07
  • Objective Function Calls: 23
  • Gradient Calls: 17

Because of the robustness issue it is still not entirely practical, but I think that can be fixed simply by adding a step-length constraint (see the sketch below). In Julia I would need to write a modified Optim.optimize, which should be fine but will take a little bit of time.
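
For illustration only, here is a hand-rolled sketch of the kind of step-length safeguard I mean (this is not a modification of Optim.optimize; it is a plain capped gradient iteration, and the function name and defaults are made up):

```julia
using LinearAlgebra

function minimise_capped(grad, x0; maxstep = 0.1, alpha = 1.0,
                         gtol = 1e-8, maxiter = 500)
    x = copy(x0)
    for _ in 1:maxiter
        g = grad(x)
        norm(g, Inf) < gtol && break            # converged
        step = -alpha * g
        s = norm(step, Inf)
        s > maxstep && (step .*= maxstep / s)   # cap the step length
        x .+= step
    end
    return x
end

# usage on a toy quadratic: the gradient of 0.5*|x|^2 is x itself
xmin = minimise_capped(x -> copy(x), [3.0, -2.0])
```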

How much work would it be to implement a new VariableCell like this in ASE?

JAMES (copied from email): If we can get these kinds of speed-ups to be robust then I agree it would be worth implementing in ASE, and I don't imagine it would be a huge amount of work. We would again have to modify the optimiser to add the max step-length constraint, but we could do this only for our LBFGS and (still to be written) CG.

cortner self-assigned this Nov 5, 2016