Benjamin Kunc
benjaminkunc.bsky.social
PhD student at the CCP, KU Leuven. Studying Measurement in Experience Sampling Methods | Co-organizing ReproducibiliTea Leuven sessions | Enjoying Meta-science
Cool poster!

If the effect is robust, could you calculate the financial costs of increasing the statistical power in ESM studies by 1%? (And then perhaps create a little interactive ✨ Shiny app ✨ so researchers could calculate it themselves?)
October 3, 2025 at 4:04 PM
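The back-of-the-envelope calculation behind this reply could look something like the sketch below: use a standard power analysis to find the extra participants needed to move from 80% to 81% power, then multiply by a per-participant cost. The effect size, cost figure, and two-group t-test design are all illustrative assumptions, not values from any actual ESM study.

```python
# Hedged sketch: marginal cost of raising statistical power by one
# percentage point. All numbers below are made-up placeholders.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

effect_size = 0.3          # assumed Cohen's d
alpha = 0.05
cost_per_participant = 25  # assumed cost in EUR per participant

# Participants per group needed for 80% vs. 81% power
n_80 = analysis.solve_power(effect_size=effect_size, alpha=alpha, power=0.80)
n_81 = analysis.solve_power(effect_size=effect_size, alpha=alpha, power=0.81)

extra_participants = 2 * (n_81 - n_80)   # both groups grow
marginal_cost = extra_participants * cost_per_participant

print(f"n per group at 80% power: {n_80:.1f}")
print(f"n per group at 81% power: {n_81:.1f}")
print(f"approx. cost of +1% power: {marginal_cost:.0f} EUR")
```

A real ESM version would need a multilevel power model (beeps nested in participants) rather than a simple t-test, which is presumably what the proposed Shiny app would wrap.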
Is there any documented procedure for implementing this in a multilevel CFA?
October 1, 2025 at 7:04 PM
If anyone has any ideas how the items could be improved, please let us know.

I'm not planning on improving them, but someone else might 🙃
September 29, 2025 at 7:29 PM
Sorry...except for the momentary quality of online solitude. That scale doesn't work at all :)
September 29, 2025 at 7:29 PM
Our interpretation is that the scales aren't completely useless, but they need revision if anyone wants to use them.

See the preprint for yourself: osf.io/preprints/ps...
September 29, 2025 at 7:29 PM
When we assessed the scales' measurement invariance across several groups, we found that none of the scales function equally among the tested subpopulations.
September 29, 2025 at 7:29 PM
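The comparison described in this post is typically done as a nested-model likelihood-ratio test: fit a configural model (same structure, free parameters per group), then a more constrained model (e.g. equal loadings), and test the chi-square difference. The sketch below shows only that final arithmetic; the chi-square values and degrees of freedom are invented placeholders, not results from the preprint.

```python
# Hedged sketch of the chi-square difference test behind measurement
# invariance testing. Fit statistics below are made-up placeholders.
from scipy.stats import chi2

chisq_configural, df_configural = 210.4, 96   # assumed: less constrained model
chisq_metric, df_metric = 248.9, 112          # assumed: equal-loadings model

delta_chisq = chisq_metric - chisq_configural
delta_df = df_metric - df_configural

# Survival function gives the p-value of the difference test
p_value = chi2.sf(delta_chisq, delta_df)

print(f"delta chi2({delta_df}) = {delta_chisq:.1f}, p = {p_value:.4f}")
# A small p-value suggests the equality constraints worsen fit,
# i.e. the parameters are not invariant across groups.
```

In practice the model fitting itself would be done in SEM software such as lavaan or Mplus; only the comparison step is shown here.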
We used multilevel confirmatory factor analyses first on the whole sample (N = 1,913 adolescents), which yielded positive-looking results. However...
September 29, 2025 at 7:29 PM
We assessed the structural validity of four ESM scales measuring: the quality of current social company, the quality of current online company, the quality of in-person solitude, and (!) the quality of online solitude.
September 29, 2025 at 7:29 PM
For anyone interested in reading it, here is the link:
benjaminkunc.substack.com/p/farewell-d...

I will be glad for any thoughts you might have. Enjoy!
Farewell, dear psych
I wish you all the best. But I really need to go.
August 26, 2025 at 5:38 PM
Later, I found myself coming back to some of the thoughts I've had about the current state of psychology and its metascience. Since I wanted the post to be a conclusion of my psychological journey, I felt I needed to write it all down.
August 26, 2025 at 5:38 PM
When I attempted to write down the reasoning behind it, I realized it was too long for a regular LinkedIn post (or even a bluesky thread!). As a result, I ended up with a full Substack post consisting of two main parts. The first part is about (you guessed it) why I dropped the PhD.
August 26, 2025 at 5:38 PM
The decision to drop my PhD might be surprising to some of you who weren't lucky enough to run away before I started rambling about methodology and psychological metascience.
August 26, 2025 at 5:38 PM
First of all, I want to thank Olivia J Kirtley, Gudrun Eisele, and Ginette Lafit for the invaluable supervision and advice they gave me throughout my PhD, and the whole Centre for Contextual Psychiatry for the opportunity to work with such amazing colleagues.
August 26, 2025 at 5:38 PM
"Altogether, these findings point to the strength of most contemporary psychological research and suggest academic incentives have begun to promote such research. However, there remain key questions about the extent to which robustness is truly valued compared with other research aspects."
June 2, 2025 at 12:18 PM
Wow. The correlation of replication success with IF seems to be positive, while it's negative for citations. That's the opposite of what I expected.
April 11, 2025 at 7:34 AM
IMHO, many researchers (implicitly) assume that positive results imply a successful measurement process, leading to the intuition that a thorough validation is unnecessary in such cases.

This would be reasonable if we could trust our findings, which doesn't seem to be the case. 4/4
April 3, 2025 at 7:51 AM
There's also a slightly edgy take on finding positive results and the criterion validity of the scales used. 3/4
April 3, 2025 at 7:51 AM
One of the points is that if one is about to commit factor analysis, it's best to first check the validity evidence based on content and response processes. Otherwise, one could end up with compelling, yet meaningless, statistical results. 2/4
April 3, 2025 at 7:51 AM
Reposted by Benjamin Kunc
People have already blamed science reform for what is happening.

For 15 years I have said: If we do not get our shit together (less publication bias, higher quality, more coordination) someone else is going to implement change top down, and we are not going to like how they do it.

And here we are.
February 19, 2025 at 4:59 AM