Meta-scientist and psychologist. Senior lecturer @unibe.ch. Chief recommender @error.reviews. "Jumped up punk who hasn't earned his stripes." All views a product of my learning history.
When researchers combine methods or concepts, more out of convenience than out of any deep curiosity about the resulting research question, to create publishable units.
"What role does {my favourite construct} play in {task}?"
Reposted by Ian Hussey
I'm guessing this isn't news to everyone, but it was to me. Bizarre.
Reposted by Alexander Wuttke
journals.sagepub.com/doi/10.1177/...
The difference: this does not rely on magic beans or assumed omniscience. It is trained and validated against a large corpus of highly relevant data, and it makes specific predictions with known accuracy and precision.
journals.sagepub.com/doi/10.1177/...
Reposted by Alberto Acerbi
There is no way this could cause confusion in this heated space.
¯\_(ツ)_/¯
Reposted by Juan Ramón
What Psych101 core texts are left?
www.newyorker.com/magazine/202...
Reposted by Ian Hussey
👉 buff.ly/iGOCf7R
#research #science
Reposted by Ian Hussey, Juan Rocha
Reposted by Ian Hussey
Reposted by Ian Hussey
Also shows once more that providing statistical code should be mandatory for every single paper.
janhove.github.io/posts/2025-1...
Reposted by Brian A. Nosek, Ian Hussey
www.justice.gov/usao-ma/pr/d...
This study had ChatGPT rate how central academic disciplines are to various constructs.
Why would ChatGPT know this?
Where is the evidence its ratings are reliable or valid?
compass.onlinelibrary.wiley.com/doi/full/10....
Follow me down a rabbit hole I'm calling "doing science is tough and I'm so busy, can't we just make up participants?"
My fear is: will everyone 'collecting' silicon samples on Qualtrics realise that they have not collected data from real participants? Will some people not realise they have done 'ask-chatgpt-if-my-hypothesis-is-right' with extra steps?
Reposted by Robert Böhm, Brady T. West, Brendan Nyhan, and 43 more
Most of us are aware of inappropriately high rates of self-citation. I had never considered the opposite: that exceptionally low levels of self-citation could indicate paper-mill activity.
fosci.substack.com/p/self-citat...
Reposted by Ian Hussey, Paolo Crosetto
New blog post: "Does multilingualism really protect against accelerated ageing? Some critical comments"
janhove.github.io/posts/2025-1...