@p-backup.bsky.social
Reposted
Fourth, some strange ideas about theory which, if taken to their logical conclusion, would end our ability to work theoretically at all, something already under threat (section 6, doi.org/10.31234/osf...; also see table 2). Declaring a conflict of interest has almost become meaningless.

8/
October 4, 2025 at 6:24 AM
Third, the peculiar idea that we somehow no longer need to read, write, or perform literature reviews, popping up like a satanic mushroom in almost all so-called OK uses of LLMs.

Having companies write our papers via their chatbots is not scientific at all. See section 5: doi.org/10.31234/osf...

7/
October 4, 2025 at 6:16 AM
Second, if we uncritically adopt AI in psychology, we outsource the programming of our experiments to companies. This undoes much of our open-source progress: experiments come under industry capture and are written without formal specifications. It also deskills us: psychologists spent the last decade learning to code.

6/
October 4, 2025 at 6:07 AM
Without Critical AI Literacy (CAIL) in psychology (doi.org/10.31234/osf...) we risk the following:

1️⃣ misunderstanding statistical models, mistaking correlation for causation;

2️⃣ confusing statistical models and cognitive models, undermining theory;

3️⃣ going against stated open science norms.
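Point 1️⃣ can be made concrete with a minimal simulation (my own illustrative sketch, not from the thread): a hidden confounder makes two variables correlate strongly even though neither causes the other, so reading the correlation causally would be a mistake.

```python
# Illustrative sketch: a confounder z drives both x and y.
# x and y correlate strongly, yet x does not cause y (and vice versa).
import random

random.seed(0)

# z is the hidden common cause; x and y each depend on z plus noise.
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 0.3) for zi in z]
y = [zi + random.gauss(0, 0.3) for zi in z]

def pearson(a, b):
    """Plain Pearson correlation coefficient (no external libraries)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

print(pearson(x, y))  # high (around 0.9)
# But intervening on x (e.g. forcing it to a constant) would leave y
# unchanged, because y depends only on z. A causal reading of the
# x-y correlation is simply wrong here.
```

With var(z) = 1 and noise variance 0.09, the expected correlation is 1/1.09 ≈ 0.92, despite there being no causal link between x and y.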

4/
October 4, 2025 at 5:51 AM