Mathijs Deen (@mathijsdeen.com)
Statistician in psychiatry in The Netherlands.
https://mathijsdeen.com
Yes it is! I read about your co-authors’ reluctance to provide you with some of the good stuff and it triggered a craving in me as well! 😄
March 13, 2025 at 10:00 PM
When you run into convergence problems, try switching to a different optimization routine via the control argument of glmmTMB or (g)lmer. Using BFGS often does the trick for me (see the sketch below).
February 26, 2025 at 9:02 AM
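A minimal sketch of what switching optimizers looks like, assuming a hypothetical model formula and data set (y, x, group, and dat are placeholders, not taken from the thread):

```r
library(glmmTMB)
library(lme4)

# glmmTMB: replace the default nlminb optimizer with optim()'s BFGS method
m1 <- glmmTMB(y ~ x + (1 | group), data = dat,
              control = glmmTMBControl(optimizer = optim,
                                       optArgs   = list(method = "BFGS")))

# lme4: pick a different optimizer through lmerControl()
m2 <- lmer(y ~ x + (1 | group), data = dat,
           control = lmerControl(optimizer = "Nelder_Mead"))
```

For GLMMs fitted with glmer(), the same pattern works with glmerControl() in place of lmerControl().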
Wrt the convergence errors: the default optimizer for glmmTMB is nlminb, which uses a (somewhat vaguely documented) quasi-Newton method from the PORT routines - it's the same default as nlme. lme4 uses bobyqa optimization by default. I think convergence issues can arise with either algorithm, depending on the use case; the sketch below shows where each package records what the optimizer did.
February 26, 2025 at 9:02 AM
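A small sketch, assuming the hypothetical fits m1 (glmmTMB) and m2 (lme4) from the previous sketch, of where each package stores optimizer and convergence information (note that these are internal components rather than a formal accessor API):

```r
# glmmTMB keeps the raw optimizer result in the fit component
m1$fit$convergence   # 0 means the optimizer reported successful convergence
m1$fit$message       # the optimizer's own convergence message

# lme4 stores optimizer details and warnings in the optinfo slot
m2@optinfo$optimizer           # name of the optimizer that was actually used
m2@optinfo$conv$lme4$messages  # convergence warnings, if any (NULL when clean)
```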
Adding random slopes and RE covariances gives you more parameters to estimate, though this should not really lead to power issues. More importantly, it generally gives you less biased SEs, and therefore better statistical inference. The resulting reduction in type-I/II error rates could be seen as related to power. (The sketch below shows how the RE structure maps onto parameter counts.)
February 26, 2025 at 9:02 AM
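A minimal sketch of how the random-effects specification translates into variance parameters, again with a hypothetical longitudinal formula (y, time, subject, and dat are placeholders):

```r
library(lme4)

# Random intercept only: 1 variance parameter
m_int <- lmer(y ~ time + (1 | subject), data = dat)

# Correlated random intercept and slope: 3 parameters
# (intercept variance, slope variance, their covariance)
m_cov <- lmer(y ~ time + (time | subject), data = dat)

# Uncorrelated intercept and slope: 2 parameters, covariance fixed at 0
m_diag <- lmer(y ~ time + (time || subject), data = dat)

VarCorr(m_cov)  # inspect the estimated variances and correlation
```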