Ian Hussey
@ianhussey.mmmdata.io
ianhussey.mmmdata.io

Meta-scientist and psychologist. Senior lecturer @unibe.ch. Chief recommender @error.reviews. "Jumped up punk who hasn't earned his stripes." All views a product of my learning history.

Pinned
Lego Science is research driven by modular convenience.

When researchers combine methods or concepts to create publishable units, driven more by convenience than by any deep curiosity about the resulting research question.

"What role does {my favourite construct} play in {task}?"

None of this is incompatible with my statement. Are you arguing for a lack of caution? Strange position to take in science.

I think once we find out that someone has fabricated much of their work, we should be very wary of treating the remainder of their work as trustworthy.

Reposted by Ian Hussey

Anyone check in on Johnny Haidt to see if he thinks Bari Weiss still embodies the “telos of truth”?

Your last saved meme is your moral philosophy

Reposted by Ian Hussey

🚨🚨 ATTENTION: I’d like to announce that I, unilaterally but bindingly, have changed the name from “Spearman’s rho” to “Nivard-Spearman’s rho”. I’ll be in talks with package and software maintainers to organize a smooth transfer to the new, more appropriate, terminology.

Reposted by Ian Hussey

I regularly cite Prinz et al. (www.nature.com/articles/nrd...) as a reference for low replicability in (non-psych) preclinical research. Believe it or not: they've omitted which studies they attempted to replicate!

I'm guessing this isn't news to everyone, but it was to me. Bizarre.
Believe it or not: how much can we rely on published data on potential drug targets? - Nature Reviews Drug Discovery
www.nature.com

I’ll send one in future 😉

Reposted by Alexander Wuttke

I’m vocally skeptical of silicon samples, yet vocally impressed by SurveyBot3000.

The difference: this does not rely on magic beans or assumed omniscience; it is trained and validated against a large corpus of highly relevant data and makes specific predictions with known accuracy and precision.
Finally, @bjoernhommel.bsky.social's and my paper introducing the SurveyBot3000 is officially out in AMPPS. It's a fine-tuned language model that guesstimates correlations between survey items from text alone. Not perfectly, but useful for search, for example.
journals.sagepub.com/doi/10.1177/...
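To make concrete what "specific predictions with known accuracy and precision" could look like, here is a minimal, purely hypothetical sketch in Python (made-up numbers, not the paper's actual validation pipeline): compare model-predicted inter-item correlations against correlations observed in held-out survey data.

```python
# Hypothetical validation sketch: how well do predicted inter-item
# correlations track correlations observed in held-out survey data?
import numpy as np

predicted = np.array([0.42, 0.10, -0.25, 0.61, 0.05])   # model's guesses (made up)
observed  = np.array([0.38, 0.18, -0.19, 0.55, -0.02])  # held-out estimates (made up)

accuracy = np.corrcoef(predicted, observed)[0, 1]   # alignment of predictions
bias     = np.mean(predicted - observed)            # systematic over/underestimation
mae      = np.mean(np.abs(predicted - observed))    # typical size of the error

print(f"r = {accuracy:.2f}, bias = {bias:+.2f}, MAE = {mae:.2f}")
```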

The fact that you have written about this seriously elsewhere makes it more likely that this piece will be taken seriously. That makes it worse, not better. How is the reader to know to read it as if you had your fingers crossed while writing it?

Reposted by Alberto Acerbi

Author of “How the phone ban saved high school” clarifies that the article is not meant to imply that the phone ban has saved high school.

There is no way this could cause confusion in this heated space.
¯\_(ツ)_/¯

Reposted by Juan Ramón

Oliver Sacks admitted his case studies in The Man Who Mistook His Wife For A Hat were fraudulent “fairy tales”.

What Psych101 core texts are left?

www.newyorker.com/magazine/202...

Reposted by Ian Hussey

The SNSF is considering restrictions for its Project funding. Goal: to ensure the quality of evaluation & stable success rates despite an increasing number of applications and limited funding. The final decision will be made in January 2026.

👉 buff.ly/iGOCf7R

#research #science
Project funding restrictions envisaged
The SNSF is responding to increased demand from researchers in order to ensure evaluation quality and stabilise success rates.
buff.ly
I am just learning of this 2015 retraction, adding to my "science as amateur software engineering" files. It seems they classified missing values as the observed outcome of interest (divorce), which classified 32% of the sample as divorced rather than the true 5%. retractionwatch.com/2015/07/21/t...
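For anyone who hasn't met this class of bug, a minimal sketch (Python/pandas, made-up numbers chosen to mirror the 32% vs 5% figures) of how recoding missing values as the outcome of interest inflates a prevalence estimate:

```python
# Hypothetical reconstruction of the coding error: 100 participants,
# 5 known divorced, 27 with missing divorce status, 68 not divorced.
import numpy as np
import pandas as pd

df = pd.DataFrame({"divorced": [1.0] * 5 + [np.nan] * 27 + [0.0] * 68})

# Buggy coding: missing values are filled in as the outcome of interest,
# so 32 of 100 participants (32%) appear divorced.
buggy_rate = df["divorced"].fillna(1.0).mean()        # 0.32

# Sanity checks: share of the sample known to be divorced, and the rate
# among observed cases only (missing values excluded).
known_divorced = (df["divorced"] == 1.0).mean()       # 0.05
observed_rate = df["divorced"].dropna().mean()        # ~0.07 (5 of 73)

print(buggy_rate, known_divorced, observed_rate)
```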

Reposted by Ian Hussey

While we are unable to offer publication in Nature Aging, we believe your manuscript may be suitable for our Gold Open Access sister journal.

I wish we could publish graphical abstracts with no article, and thereby make memes like this citable objects.

Reposted by Ian Hussey

Thanks @janhove.bsky.social for taking the time to check the paper, something that the paid editors at Nature, as well as the unpaid reviewers, apparently did not do.

Also shows once more that providing statistical code should be mandatory for every single paper.

janhove.github.io/posts/2025-1...
Jan Vanhove :: Blog - Does multilingualism really protect against accelerated ageing? Some critical comments
janhove.github.io

This was a contentious debate in suicide research. I couldn’t convince several people it was the case.
Dana-Farber Cancer Institute Agrees to Pay $15 Million to Settle Fraud Allegations Related to Scientific Research Grants. The relator [Sholto David] will receive $2,625,000 under today’s settlement.
www.justice.gov/usao-ma/pr/d...
Dana-Farber Cancer Institute Agrees to Pay $15 Million to Settle Fraud Allegations Related to Scientific Research Grants
BOSTON – Dana-Farber Cancer Institute, Inc. (Dana-Farber) has agreed to pay $15 million to resolve allegations that, between 2014 and 2024, it made materially false statements and certifications relat...
www.justice.gov

Already happening in similar forms from big-name folk: bsky.app/profile/ianh...
Many psychologists are treating LLMs as if they are the mind of god.

This study had ChatGPT rate how central academic disciplines are to various constructs.

Why would ChatGPT know this?

Where is the evidence its ratings are reliable or valid?

compass.onlinelibrary.wiley.com/doi/full/10....

Needs hallucinated emojis in 1% of cases

Their site says it has been trained on “thousands” of human participants... which seems very low for such a product?

This is news to me.

My fear is: will everyone 'collecting' silicon samples on Qualtrics realise that they have not collected data from real participants? Will some people not realise they have done 'ask-chatgpt-if-my-hypothesis-is-right' with extra steps?
Did you know that from tomorrow, Qualtrics is offering synthetic panels (AI-generated participants)?

Follow me down a rabbit hole I'm calling "doing science is tough and I'm so busy, can't we just make up participants?"

If you read the post, you’ll see these questions are answered there.

These authors are not a priori at fault, but they represent a useful lead for examining that work further from a forensic scientometrics perspective.

I think you miss the point. It is obvious that researchers cite themselves; precisely because of that, it is not obvious why an established researcher would never, ever cite themselves in any of their articles. That's what the 2% represents.

Only about 2% of researchers have never cited their own work.

Most of us are aware of inappropriately high rates of self-citation. I had never considered the opposite: that exceptionally low levels of self-citation could indicate papermill activity.

fosci.substack.com/p/self-citat...
Self-citation patterns among researchers
What's normal? What's unusual?
fosci.substack.com
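A toy sketch of the heuristic (hypothetical author IDs and column names, not the blog post's actual method): compute per-author self-citation rates from reference lists and look at the extreme low end.

```python
# Toy data: one row per (citing author, cited author) pair pulled from
# reference lists. Author IDs are made up.
import pandas as pd

citations = pd.DataFrame({
    "citing_author": ["A", "A", "A", "B", "B", "C", "C", "C", "C"],
    "cited_author":  ["A", "X", "Y", "X", "Y", "X", "Y", "Z", "W"],
})

citations["is_self"] = citations["citing_author"] == citations["cited_author"]
self_rates = citations.groupby("citing_author")["is_self"].mean()

# Established authors with many outgoing citations but a self-citation
# rate of zero are the unusual ~2% worth a closer (not accusatory) look.
never_self_citing = self_rates[self_rates == 0]
print(self_rates, never_self_citing, sep="\n\n")
```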
A recent study purports to have found that multilingualism protects against accelerated ageing. I've taken a closer look at it, and it doesn't look good.

New blog post: "Does multilingualism really protect against accelerated ageing? Some critical comments"
janhove.github.io/posts/2025-1...

I don't care what it did to our political landscape, that sounds delicious. Worth it.