Daniel Lakens
@lakens.bsky.social
Metascience, statistics, psychology, philosophy of science. Eindhoven University of Technology, The Netherlands. Omnia probate ("test everything"). 🇪🇺
Pinned
My paper on concerns about replicability, theorizing, relevance, generalizability, and methodology across 2 crises is now in press at the International Review of Social Psychology. After revisions it was 17500 words, so it is split in 2 parts: osf.io/dtvs7_v2 and osf.io/g6kja_v1
OSF
osf.io
A recent paper suggests we stop using the terms Registered Report and preregistration. This is a very bad idea by my fellow metascientists. There is no way I am ever gonna drop 'Registered Reports' for 'Two-stage review with in principle acceptance'

journals.sagepub.com/doi/10.1177/...
November 10, 2025 at 3:03 PM
Reposted by Daniel Lakens
My Shiny app containing 3530 Open Science blog posts discussing the replication crisis is updated - you can now use the SEARCH box. I fixed it as my new PhD Julia wanted to know who had called open scientists 'Methodological Terrorists' :) shiny.ieis.tue.nl/open_science...
Open Science Blog Browser
shiny.ieis.tue.nl
November 8, 2025 at 7:15 PM
Reposted by Daniel Lakens
Oops. Ooooooooooooops.

I do hope that nobody has been given or denied a job/promotion based on their SpringerNature citation counts in the past 15 years.

arxiv.org/pdf/2511.01675

h/t @nathlarigaldie.bsky.social
November 7, 2025 at 2:02 PM
This is why we have sample size justifications. Some people might mindlessly dismiss small-N studies, but a well-justified sample size (see online.ucpress.edu/collabra/art...) defends against such mindlessness.
November 7, 2025 at 4:49 PM
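A sample size justification of the kind linked above often reduces to a power calculation against a smallest effect size of interest. As a minimal sketch (not from the linked paper — the effect size d = 0.5, alpha, power, and the normal approximation are my own assumptions), the per-group n for a two-sided two-sample t-test can be approximated as:

```python
from math import ceil
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample t-test
    (normal approximation; the exact t-based answer is slightly larger)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided critical value
    z_beta = norm.ppf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Assumed smallest effect size of interest: d = 0.5
n_per_group(0.5)  # → 63 (the exact t-test answer is 64)
```

Smaller assumed effects blow the requirement up fast — d = 0.2 already needs roughly 400 per group — which is exactly why an explicit justification beats eyeballing N.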
New 'Preliminary Report' submission format at Science and Medicine in Football. An idea that should be interesting for other journals! Explicitly make space for honestly reported smaller sample-size studies www.tandfonline.com/doi/full/10....
November 6, 2025 at 2:40 PM
Reposted by Daniel Lakens
@lakens.bsky.social pointed out on twitter that these text-mining studies (on mostly abstracts) are biased to overestimate the problem because exact test-statistics are more often reported for significant tests while non-significant tests are just described in words.
Look at the distribution of z-values from medical research!
November 5, 2025 at 1:28 PM
Reposted by Daniel Lakens
📚 Replication value as a function of citation impact and sample size: response to commentaries open.lnu.se/index.php/me...
October 30, 2025 at 1:15 PM
Reposted by Daniel Lakens
Really enjoyed this!

I esp loved the discussion of reverse p-hacking as a means of purposely generating null results. I could picture this happening more as null results become more acceptable--it'd be yet another way of creating a "clear story." Might I suggest calling it: "p-stacking"? 😉
November 4, 2025 at 3:56 AM
Reposted by Daniel Lakens
Wouldn't it be great to gather top social science journal editors + experts on fraud-prevention to discuss better ways to fraud-proof our field @ the National Academies? This is happening! Step 1 is creating an organizing committee. Submit nominees by 11/7:
www.nationalacademies.org/our-work/enh...
www.nationalacademies.org
October 24, 2025 at 7:12 PM
Reposted by Daniel Lakens
The package formerly known as papercheck has changed its name to metacheck! We're checking more than just papers, with functions to assess OSF projects, GitHub repos, and AsPredicted pre-registrations, with more being developed all the time.

scienceverse.github.io/metacheck/
Check Research Outputs for Best Practices
A modular, extendable system for automatically checking research outputs for best practices using text search, R code, and/or (optional) LLM queries.
scienceverse.github.io
November 3, 2025 at 4:20 PM
Went on a trip to Paris for the day, giving a talk at Université Paris Cité, saying hi to former lab visitor Ethan Meimoun, and we even had time to pick up gluten free goodies for my wife at Copains on the way!
November 3, 2025 at 4:23 PM
Reposted by Daniel Lakens
Just discovered this great preprint by
@crist14n.bsky.social @lakens.bsky.social

sportrxiv.org/index.php/se...

I wish someone had given me tutorials/easy-to-understand explanations like these when I was doing my bachelor's degree!
What is your hypothesis? : On the importance of knowing your hypothesis before conducting a hypothesis test | SportRxiv
sportrxiv.org
November 2, 2025 at 9:34 AM
Reposted by Daniel Lakens
Relatedly, I think SESOI is a tremendously useful and under-appreciated concept. It's become a regular tool in my power analysis workflows, and I wish I understood it sooner in my career.
Great points about the clarity the SESOI brings in this context
There still seems to be a lot of confusion about significance testing in psych. No, p-values *don’t* become useless at large N. This flawed point also used to be framed as "too much power". But power isn't the problem – it's 1) unbalanced error rates and 2) the (lack of a) SESOI. 1/ >
October 31, 2025 at 3:42 PM
Part 2 of our episode on p-hacking is out! Tune in to the latest Nullius In Verba episode here:

nulliusinverba.podbean.com/e/p-hacking-...
Episode 69: Fraus P-Valoris - II | Nullius in Verba
In this episode, we continue the discussion on p-hacking. Were the accusations of p-hacking valid? And how can one avoid said accusations? What are the reasons for p-hacking? And what are some…
nulliusinverba.podbean.com
October 31, 2025 at 5:29 PM
Reposted by Daniel Lakens
Tenzing will also gently encourage best practices for Acknowledgments sections, by modeling them in its outputs.
We are interested in talking to groups (like statistical consulting units or library professionals) who are frequently not acknowledged appropriately in papers about how this might help.
October 30, 2025 at 9:50 PM
Reposted by Daniel Lakens
New paper finds that selective reporting remains the most replicable finding in science: journals.sagepub.com/doi/full/10.... I especially like their new exploratory metric 'p-values per participant'. Some papers had 11 p-values per participant! 🤯
Sage Journals: Discover world-class research
Subscription and open access journals from Sage, the world's leading independent academic publisher.
journals.sagepub.com
October 31, 2025 at 7:39 AM
Reposted by Daniel Lakens
Statistical tests (and thus p-values, if you use frequentist methods) are useful if you want to test a hypothesis or make a dichotomous claim (for a nice overview, see doi.org/10.1177/0959... by @uyguntunc.bsky.social et al.), regardless of whether N is small or large. >
The epistemic and pragmatic function of dichotomous claims based on statistical hypothesis tests - Duygu Uygun Tunç, Mehmet Necip Tunç, Daniël Lakens, 2023
Researchers commonly make dichotomous claims based on continuous test statistics. Many have branded the practice as a misuse of statistics and criticize scienti...
doi.org
October 31, 2025 at 8:13 AM
Reposted by Daniel Lakens
5% is quite a lot if you think about it. Huge N gives you the luxury to reduce alpha by a lot and still keep very high power. E.g., alpha = 0.5% (0.005) would give you 98.8% power for the same effect (in a t-test). The best balance depends on the cost of each error type, see tinyurl.com/yut35b3u >
Justify Your Alpha by Minimizing or Balancing Error Rates
A preprint ("Justify Your Alpha: A Primer on Two Practical Approaches") that extends the ideas in this blog post is available at: https://ps...
tinyurl.com
October 31, 2025 at 8:13 AM
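The trade-off in the post above can be sketched numerically. This is a hedged illustration with made-up numbers (d = 0.2 and n = 1000 per group are my assumptions, not the figures behind the quoted 98.8%), using the standard normal approximation to two-sample t-test power:

```python
from math import sqrt
from scipy.stats import norm

def power_two_sample(d, n_per_group, alpha):
    """Approximate power of a two-sided two-sample t-test
    via the normal approximation."""
    ncp = d * sqrt(n_per_group / 2)   # noncentrality parameter
    z_crit = norm.ppf(1 - alpha / 2)  # two-sided critical value
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

# Made-up numbers: effect d = 0.2, n = 1000 per group
power_two_sample(0.2, 1000, 0.05)   # ≈ 0.994
power_two_sample(0.2, 1000, 0.005)  # ≈ 0.952 -- alpha cut 10x, power barely drops
```

The point survives the toy numbers: at large N, shrinking alpha by an order of magnitude costs only a few points of power, so keeping alpha fixed at 5% is a choice, not a necessity.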
Reposted by Daniel Lakens
There still seems to be a lot of confusion about significance testing in psych. No, p-values *don’t* become useless at large N. This flawed point also used to be framed as "too much power". But power isn't the problem – it's 1) unbalanced error rates and 2) the (lack of a) SESOI. 1/ >
But here's the thing: p-values and significance become useless at such large sample sizes. When you're dividing the coefficient by the SE and the sample size is in the tens of thousands, EVERYTHING IS SIGNIFICANT. All you're testing is whether the coefficient is different from zero.
October 31, 2025 at 8:13 AM
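One way to see why the "everything is significant" complaint targets the nil hypothesis rather than p-values themselves: with a SESOI in hand you can run a minimum-effect test instead. A minimal sketch with hypothetical summary statistics (the estimate, standard error, and SESOI below are invented for illustration):

```python
from scipy.stats import norm

# Hypothetical summary statistics from a very large sample:
est = 0.03    # tiny estimated effect
se = 0.005    # tiny standard error (huge N)
sesoi = 0.10  # smallest effect size of interest

# Nil-hypothesis test (H0: effect = 0): "everything is significant"
z_nil = est / se
p_nil = 2 * norm.sf(abs(z_nil))  # ≈ 2e-9, highly significant

# Minimum-effect test (H0: |effect| <= SESOI): asks whether the effect
# is large enough to matter, not merely whether it is nonzero
z_min = (abs(est) - sesoi) / se
p_min = norm.sf(z_min)           # ≈ 1, no evidence the effect exceeds the SESOI
```

The same data yield a vanishing p against zero and no claim at all against the SESOI — the large N isn't the problem; testing against a hypothesis nobody believes is.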
Exciting new addition to the AsPredicted and ResearchBox research infrastructure toolbox: AsCollected ascollected.org A platform to document results provenance - where did data come from, who collected it, and who cleaned and analyzed it. So important!
Home | AsCollected
ascollected.org
October 31, 2025 at 7:28 AM
Reposted by Daniel Lakens
New Commentary published in the German Journal of Exercise and Sport Research, emphasizing not only the need to specify a SESOI when testing a claim, but also recommending a discussion about best practices for determining a SESOI
link.springer.com/article/10.1...
From best practices to severe testing: A methodological response to Büsch and Loffing (2024) - German Journal of Exercise and Sport Research
This commentary builds on the Büsch and Loffing (2024) exploration of methodological best practices for validly evaluating intervention studies. Extending their perspective, it is argued that research...
link.springer.com
October 30, 2025 at 9:43 PM
Reposted by Daniel Lakens
Looking for help from the #psychology #metascience communities!

This www.theoryfinder.com/theory-repos...
online repository lists more than 200 theories (*), mostly from psychology. The authors' goal is to foster the use of theory ... I'd like some vetting of these theories. How do we do this?
Theories - Theory Repository
www.theoryfinder.com
October 30, 2025 at 10:33 AM
Are you interested in thinking about which studies are worth replicating? Then you have 10 articles to dig into in Meta-Psychology, representing a very wide range of viewpoints on this topic, out now: open.lnu.se/index.php/me...
LnuOpen | Meta-Psychology
Original articles
open.lnu.se
October 30, 2025 at 3:43 PM