Daniel Lakens
@lakens.bsky.social
Metascience, statistics, psychology, philosophy of science. Eindhoven University of Technology, The Netherlands. Omnia probate. 🇪🇺
Pinned
My paper on concerns about replicability, theorizing, relevance, generalizability, and methodology across two crises is now in press at the International Review of Social Psychology. After revisions it was 17,500 words, so it is split into two parts: osf.io/dtvs7_v2 and osf.io/g6kja_v1
Our podcast discussion on why anyone who tries to create more rigour needs to be called the 'police' is even more relevant today. Give it a listen!
New episode of Nullius In Verba! We discuss the jingle-jangle fallacy, the problem of vague concepts, how the incentive structures promote vagueness, why people who prefer more rigour have to be called the validity 'police', and much more!

nulliusinverba.podbean.com/e/episode-74...
Episode 74: Notiones Vague | Nullius in Verba
In this episode, we discuss the problems associated with vague concepts in psychological science. We talk about the jingle-jangle fallacy, the trade-off between broad concepts and more precise…
nulliusinverba.podbean.com
February 19, 2026 at 8:00 AM
If you want to recognize these bad faith actors, a clear giveaway is that they talk about Open Scientists or Metascientists as if they all think the same way, ignoring the huge diversity among people in this community. A paper that analyzes this bad faith criticism: journals.sagepub.com/doi/10.1177/...
February 19, 2026 at 6:41 AM
The quality of criticism aimed at scientists who criticize their own field has dropped perniciously in recent years. Unfair oversimplifications, bad faith arguments, and a lack of actual contributions have made it very easy to dismiss people who seem mainly driven by emotions.
Right, so, on a listserv for scientific journal editors someone just compared open science advocates to ICE. Good times.
February 19, 2026 at 5:40 AM
Reposted by Daniel Lakens
When I was an ECR life sometimes felt like a struggle to get senior scientists to accept OS practices like pre-reg. Now I'm the senior scientist I sometimes feel life is a struggle to get people to do pre-registration mindfully & meaningfully rather than just by rote. Papers like these help :)
If you set out to test a hypothesis, you should preregister it. If you deviate from a preregistration, report a table with all deviations, and evaluate the consequences for the validity and severity of the test. As a reviewer, ask for such a table!

online.ucpress.edu/collabra/art...
When and How to Deviate From a Preregistration
As the practice of preregistration becomes more common, researchers need guidance in how to report deviations from their preregistered statistical analysis plan. A principled approach to the use of…
online.ucpress.edu
February 17, 2026 at 5:53 PM
Reposted by Daniel Lakens
If you want to keep up to date with the latest news and events from the Irish Reproducibility Network, click on the link and join the mailing list!
irishrn.org
#irishResearch #openResearch
Irish Reproducibility Network
TRAIN: We deliver workshops, short courses, and online materials that promote transparent methods and sustainable training through an institutional and train-the-trainer model. CONNECT: We co...
irishrn.org
February 17, 2026 at 9:24 AM
Reposted by Daniel Lakens
First study from my PhD project is now published. It has been overwhelming, but extremely insightful! 👇🏻
@metahag.bsky.social had a paper published and I can't help myself from shouting about it! 📣

A scoping review, 952❗educational interventions, pupils with intellectual disability.

Found: lacking theoretical background and ethics board approvals, and use of unstandardised tests
doi.org/10.1016/j.ij...
February 17, 2026 at 11:19 AM
Currently a lot of attention goes to getting people to preregister, checking preregistrations, and noting deviations. But the most important step will be evaluating the impact of those deviations! There is almost no work on this. A good topic for a young metascientist!
February 17, 2026 at 7:12 AM
An interesting Metascientific question is how we deal with deviations from preregistrations. Can we evaluate their impact on the severity of tests? If not, should studies with substantial deviations be reported as non-preregistered?
February 17, 2026 at 7:09 AM
If you set out to test a hypothesis, you should preregister it. If you deviate from a preregistration, report a table with all deviations, and evaluate the consequences for the validity and severity of the test. As a reviewer, ask for such a table!

online.ucpress.edu/collabra/art...
When and How to Deviate From a Preregistration
As the practice of preregistration becomes more common, researchers need guidance in how to report deviations from their preregistered statistical analysis plan. A principled approach to the use of…
online.ucpress.edu
February 17, 2026 at 7:07 AM
Reposted by Daniel Lakens
I did eventually contact someone at Elsevier regarding the apparent self-plagiarism. A couple of months later, give or take, I got a reply along the lines of "there is more duplicate content than we would want to see, but so what?" I guess I shouldn't be surprised.
February 17, 2026 at 5:06 AM
If you have never read the classic 'cargo cult science' speech by Feynman, you are missing out! You can listen to it, as Smriti reads it in full in the prologue of the 8th Nullius In Verba episode (almost 2 years ago!) nulliusinverba.podbean.com/e/cargo-cult...
Prologus 8: Cargo Cult Science (R.P. Feynman) | Nullius in Verba
In this bonus episode, we present a reading of the famous speech by physicist Richard Feynman on "science that isn't science," Cargo Cult Science, which will be the topic of the next episode. Enjoy.
nulliusinverba.podbean.com
February 16, 2026 at 4:35 PM
Reposted by Daniel Lakens
Revised version of our preprint on Rethinking Type S and M errors. Reviewers thought our criticism was reasonable and well-supported, but we have removed a figure, pay less attention to approaches that correct for bias, and have strengthened our criticism. osf.io/preprints/ps...
February 15, 2026 at 7:52 PM
Reposted by Daniel Lakens
Approximately 40% of all studies preregistered on the Open Science Framework are never shared publicly. That’s a lot. Researchers give several reasons; null results and bad planning (or lack of priority) are the main ones.

journals.sagepub.com/doi/10.1177/...
February 14, 2026 at 6:13 PM
Reposted by Daniel Lakens
Babbage, 1830, discussing the problem that scientists selectively report findings that they want to be true.

Confirmation bias is a strong human tendency. This is why we need to design science in a way that prevents confirmation bias from leading us away from the truth.
February 14, 2026 at 4:32 PM
Reposted by Daniel Lakens
Our new episode on incentives in science is out now! We talk about the different kinds of incentives (prestige, awards, money, tenure, etc.): when they help and how they might hurt science. nulliusinverba.podbean.com/e/incentives...
Episode 75: Incitamenta - I | Nullius in Verba
In this two-part episode, we discuss incentives in science and academia. We discuss the various incentives in science, including recognition, citations, money, and the kick in the discovery.   Shownot...
nulliusinverba.podbean.com
February 15, 2026 at 6:02 AM
Reposted by Daniel Lakens
Wild how economists and political scientists worry so much about unbiased tests **in their papers** and yet basically ignore how their journals filter on significance. Given our noisy tests, the latter creates huge bias away from zero.
September 10, 2025 at 3:00 PM
As long as scientists who think like Matt start their papers with 'We do not honestly present all results we find, and the result below is the strongest result we selected from an unknown number of studies that we performed', this is fine.

The problem is no one is that honest in their papers.
And to the point about “all nulls rejected in a paper” I do think the current production function in science is “present the strongest possible results that fit with your theory/interpretation.” Not ideal, but I think a reflection of the current adversarial nature of review/editing at journals
February 14, 2026 at 9:15 PM
Reposted by Daniel Lakens
I think this is a real difference between me feeling like social science is in a bad place and people that aren’t as upset with the situation.

I really don’t like being in a world where nobody believes anything we publish.
I am not as pessimistic or upset about the situation as the OP. I think most scholars internalize these issues and properly adjust by not believing almost any published study (except their own lol). We’re good little Bayesian boys and girls, imho
February 14, 2026 at 4:50 PM
In standard scientific reports in psychology, 96% of first-mentioned main hypotheses are supported. In Registered Reports, this is 46%. The 96% is clearly biased (given any plausible true H1 rate and power). Lack of transparency means we do not know the true base rate.

journals.sagepub.com/doi/10.1177/...
February 14, 2026 at 6:08 PM
It has now become common to accuse metascientists, without any factual support, of the dumbest behavior on this platform. Of course, it is only a lot of the field, not all of it 🙄
February 14, 2026 at 2:30 PM
Reposted by Daniel Lakens
Exactly, we need meta-scientists who not only operate on an abstract level, but who are also involved in specific areas of research!
February 14, 2026 at 1:00 PM
What I find so impressive about @jamessteeleii.bsky.social is that he practices what he preaches. How many metascientists say 'we need better theory' without ever having improved a theory? How many say 'there is not one way to do good science' without showing us any way to do good science?
We continue with James Steele (wearing his 'stay calm and read Paul Meehl' shirt), who is running us at high pace through his views on how we develop and test strong theories. #PSE8
February 14, 2026 at 10:17 AM