Brian Nosek
briannosek.bsky.social

Co-founder of Project Implicit, Society for Improving Psychological Science, and the Center for Open Science; Professor at the University of Virginia

Brian Arthur Nosek is an American social-cognitive psychologist, professor of psychology at the University of Virginia, and co-founder and director of the Center for Open Science. He also co-founded the Society for the Improvement of Psychological Science and Project Implicit. He has been on the faculty of the University of Virginia since 2002.


Thanks for the heads up. I’ll leave it for the moment, if only as a self-reminder of my vulnerability to insufficient attention to evidence appraisal.

Reposted by Brian A. Nosek

New preprint from COS and @ip4os.eu: a summary of a small circle meetup on open science held at the Metascience Conference in London (1 July 2025). It captures key themes of the discussion—open science priorities, policy developments, and what's on the horizon.

Read more: osf.io/preprints/me...

Reposted by Brian A. Nosek

AMPPS recently accepted a new paper, "Registered Replication Report: Johns, Schmader, & Martens (2005)," on stereotype threat (osf.io/preprints/ps...); the full paper is forthcoming on our website. AMPPS is now soliciting Commentary articles on this paper. @psychscience.bsky.social /cont

I don't believe so, but it might be possible that some candidate datasets have been identified that need to be assessed. You can pose specific questions to replications@cos.io and the team members closest to this can provide definitive answers.

<thank goodness he only took Spearman’s when Pearson’s was sitting right there>

One pathway to addressing some AI slop might be reverse citation searches of fake citations to identify the offending papers.
And so checked out Google Scholar. Now on my profile it doesn't appear, but somehow on Nelli's it does and ... and ... omg, IT'S BEEN CITED 42 TIMES, almost exclusively in papers about AI in education from this year alone... scholar.google.com.vn/citations?vi...

Having been on many search committees over the years: with hundreds of applicants, there are dozens that meet the criteria on the paper record alone, devoid of substance.

Hiring processes have plenty of biases and weaknesses, but failing to hire that guy who had a good record isn't one of them.

The "I didn't get a faculty job because of diversity goals" genre of posts is odd because they ironically miss that it is incredibly competitive, like sports.

When one's record looks like the player who shoots indiscriminately, plays no defense, and screams foul every time they're stripped, well...

Join our Replicability Project: Health Behavior!

We have 55 replication studies underway; our target is 65-70.

We are only recruiting for secondary data replications--i.e., using existing data to test the original question.

Here's a list of studies we think could be feasible.

If interested...
Replications Sourcing Sheet
docs.google.com

May this holiday season be filled with unexpected cameos of replicability and data sleuth work everywhere you look.

www.youtube.com/watch?v=SskQ...
On the seventh day of Newtonmas my conscience said to me..
YouTube video by acapellascience

Thank you for this incredible service to the field!
🚨 Now out in Psych Science 🚨

We report an adversarial collaboration (with @donandrewmoore.bsky.social) testing whether overconfidence is genuinely a trait

The paper was led by Jabin Binnendyk & Sophia Li (who is fantastic and on the job market!) Free copy here: journals.sagepub.com/eprint/7JIYS...

Observation

Those most bullish on use of AI in authoring seem to define productivity in terms of generating papers, not ideas.

Those most bearish seem to define productivity in terms of ideas, not papers.

Those in between seem most focused on whether AI can help improve communication of ideas.
Dana-Farber Cancer Institute Agrees to Pay $15 Million to Settle Fraud Allegations Related to Scientific Research Grants. The relator [Sholto David] will receive $2,625,000 under today’s settlement.
www.justice.gov/usao-ma/pr/d...
BOSTON – Dana-Farber Cancer Institute, Inc. (Dana-Farber) has agreed to pay $15 million to resolve allegations that, between 2014 and 2024, it made materially false statements and certifications relat...
Rob Reiner (1947–2025)

There need to be more movies about villains who aren't very good at being villains.

Facebook memory reminders are sometimes unwelcome.

My understanding is that GS is reindexing them. I don’t know their ETA on reappearing. @olsonscholcomm.bsky.social

Way to go Calvin!

Reposted by Philip N. Cohen

I, for one, welcome our noble colleagues' willingness to absorb all the AI-generated content for the rest of the preprint servers.

Reposted by Brian A. Nosek

Good luck. Go with God (and take this shit with you)
A new preprint server welcomes papers written and reviewed by AI
With human peer review struggling to keep pace with machine-generated science, aiXiv enlists bots to help
So...my undergrad thesis student is doing a quality analysis of studies found in meta-analyses. She identified a few meta-analyses, and we contacted the authors to request the effect sizes and other variables for the studies in their papers.

Here's what happened:

scientiapsychiatrica.com/index.php/Sc...
The Impact of Social Media on Adolescent Mental Health: A Meta-Analysis | Scientia Psychiatrica
Introduction: The proliferation of social media has raised significant concerns about its potential effects on the mental health of adolescents. This meta-analysis aims to provide a comprehensive asse...

I have had a good experience as editor for a qualitative RR submission; I hope that more journals will experiment with it.

I am not sure, however, that I would call that weaponizing against mixed/qual methods, as it is a novel (& untested) format, not a gatekeeping requirement for submitting to the journal.

The journal has a data sharing policy with no exceptions for privacy? That's bananas, and wouldn't just eliminate qual research, but many quantitative studies too!

I am also curious: is there evidence that esteem for qual and mixed methods has declined since the onset of the reform movement?

My completely idiosyncratic experience is that non-experimental and non-quantitative methods have, if anything, improved in stature in that time.

Can you give examples of mixed methods papers that should have been accepted at prestigious journals but were denied because they didn't meet a standard that was not relevant to the content?

Not doubting that cases exist, just wanting to understand some cases to learn.