Veli-Matti Karhulahti
@mkarhulahti.bsky.social

science, gaming, art (senior researcher at university of jyväskylä)

Can they argue the ad is for single-player as long as multiplayer isn't mentioned? (assuming some purchases will be offered in single-player mode too, tho)
📢 Register for the 5th Helsinki Initiative webinar (8 December) on Multilingualism in Scholarly Communication with presentations by @tatsuya-amano.bsky.social, @karenstroobants.bsky.social and Andre Brasil!

More information and registration: www.helsinki-initiative.org/en/events/5t...

One of my all-time fav rants on this topic-- especially love this figure demonstrating how expert clinicians fail to agree on a major depression diagnosis most of the time (57%) based on the DSM-5 field trials
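
to make that figure concrete-- a minimal sketch (invented counts, NOT the actual field-trial data) of why raw agreement numbers flatter reliability: two clinicians can "agree" over half the time while Cohen's kappa, the chance-corrected statistic the DSM-5 field trials reported, stays near zero

```python
# Hypothetical counts for illustration only (not the DSM-5 field-trial data):
# raw agreement can look tolerable while chance-corrected agreement is poor.

def cohens_kappa(table):
    """table[i][j] = number of cases rater A labeled i and rater B labeled j."""
    n = len(table)
    total = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(n)) / total
    row_marg = [sum(row) / total for row in table]
    col_marg = [sum(table[i][j] for i in range(n)) / total for j in range(n)]
    chance = sum(r * c for r, c in zip(row_marg, col_marg))
    return (observed - chance) / (1 - chance)

# Two clinicians independently diagnose the same 100 patients.
table = [[20, 25],   # A: depressed     -> B agrees on 20, disagrees on 25
         [18, 37]]   # A: not depressed -> B disagrees on 18, agrees on 37

raw = (table[0][0] + table[1][1]) / 100
print(f"raw agreement: {raw:.0%}")                  # 57%
print(f"Cohen's kappa: {cohens_kappa(table):.2f}")  # 0.12, i.e. near chance
```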

currently, editors (handling tons of papers) must heavily trust reviewers as they cannot be experts in everything-- a move toward more distributed editorial labor (in exchange for less reviewing) expects more human scrutiny from topic-fit editors, who'd then also manage fewer papers on average

The reason why this is interesting & maybe even promising is: it isn't simply "less human scrutiny" but a shift from reviewer trust to editor trust--
In a future publishing system, qed + editor could certainly replace "reviewers+editor" somewhat. Editors could still call in expert reviewers when they feel it's needed.

But replacing "2 reviewers + 1 editor" with "1 reviewer + qed + 1 editor" would probably give similar results.

10/n
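
a back-of-envelope way to see why-- treat each checker (reviewer, editor, qed) as independently catching a fatal flaw with some probability; every number below is an assumption for illustration, not data

```python
# Sketch only: P(at least one checker catches a fatal flaw), assuming the
# checks are independent. All probabilities are invented for illustration,
# and "qed" is modeled as just another imperfect check.

def p_caught(*checkers):
    """1 - P(every checker misses the flaw)."""
    p_miss = 1.0
    for p in checkers:
        p_miss *= 1 - p
    return 1 - p_miss

P_REVIEWER = 0.5  # assumed: an expert reviewer spots the flaw
P_EDITOR = 0.3    # assumed: a topic-fit editor spots it
P_QED = 0.4       # assumed: an automated qed-style check spots it

print(f"2 reviewers + 1 editor:      {p_caught(P_REVIEWER, P_REVIEWER, P_EDITOR):.0%}")  # ~82%
print(f"1 reviewer + qed + 1 editor: {p_caught(P_REVIEWER, P_QED, P_EDITOR):.0%}")       # ~79%
```

under these made-up numbers the two setups land within a few points of each other; the real question is how the assumed probabilities compare in practice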

attention to alternatives is good as it contributes to gradual, slow changes that over years (decades) can lead to system-level changes too-- but it's those institutions that offer alternatives which need to become more sustainable, visible, and "prestigious" for progress to keep happening

Having an award like this is ok for increasing the visibility of alternatives, but it also struggles to address the real problem: the system remains broken bc the contracts (= lives) of many ppl globally depend on publishing in metrics journals; they don't have a choice
A >$10,000 award that will be given to multiple individuals who "communicate their work with radical transparency, making it easy for others to use, test, and build on their ideas" outside of journals, in an ongoing project.

We also need awards for the many communities (not individuals) already doing this
Deadline today!!!
Some scientists aren’t waiting for journals to catch up. They’re showing us what’s next.

The Beyond the Journal awards honor those breaking the mold in how science is shared.

More details: pracheeac.substack.com/p/off-roadin...

Nominate or self-nominate here: www.experiment.foundation/beyond

the right solution would be ofc to go back to the drawing board and figure out what the state of the art is in theory & practice, but the structures we have don't allow it: authors need to get their paper out to satisfy the funder who gave money to do the flawed test 🫠

As an RR editor/reviewer I've linked this paper to authors many times when they start with a 12-hypothesis testing plan-- alas, it isn't merely an H-testing issue but usually reflects how research programs are broken deep down, trying to ask RQs that simply cannot be answered by any effect size
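
for readers wondering why a 12-hypothesis plan raises flags even before those deeper program-level problems-- the surface-level arithmetic is standard multiple-testing math (assuming independent tests at alpha = .05):

```python
# With k independent tests at level alpha, the chance of at least one false
# positive when every null is true (the family-wise error rate) is
# 1 - (1 - alpha)^k. Independence is an assumption; correlated tests change
# the exact number but not the lesson.
alpha, k = 0.05, 12
fwer = 1 - (1 - alpha) ** k
print(f"family-wise error rate with {k} tests: {fwer:.0%}")       # 46%
print(f"Bonferroni-corrected per-test alpha:   {alpha / k:.4f}")  # 0.0042
```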

Thought about this plenty over the last years; imo it's already a huge step to actively reflect on it-- grey areas will always be massive & it's impossible to justify clearcut lines, but openly disclosing humility about effect meaning immediately increases my trust in authors/results

I've met some of the folks running it & they seemed like professionals with shared values-- planning to submit my own next ms there (they don't have much marketing power, so it's a clear tradeoff for less reach, but i can afford it at this point)

Many ppl think of registered reports as a tool for bias control (for good historic reasons) but ime this is the truly useful part of RRs: get feedback on the *design* before it's too late

--not specific to h-testing but applies to any kind of data, method, or study in general
The reality is a lot of parts of a research project are determined in the *design,* not in the analysis.

That's why serious projects have lit reviews and unserious projects have "our stakeholders liked these words"

yeah, how they highlight working with original authors may also imply resistance toward work that doesn't-- in theory, the eLife model could be optimal for replication publishing as it does offer a platform for voicing multiple different interpretations simultaneously

I agree with the post but my guess is the mechanism will be the opposite: selection bias will favour failed replications, as they represent the more interesting results in this case
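
that guessed-at mechanism is easy to make concrete-- a toy simulation (every parameter invented) of how "interesting" failed replications getting written up more often inflates the published failure rate:

```python
# Toy model of selection bias in replication publishing. All parameters are
# invented; the point is only the direction of the distortion.
import random

random.seed(1)
TRUE_FAILURE_RATE = 0.30  # assumed share of replications that truly fail
P_PUBLISH_FAILED = 0.9    # assumed: failures are "interesting", often written up
P_PUBLISH_SUCCESS = 0.5   # assumed: successes get written up less often

published = []
for _ in range(100_000):
    failed = random.random() < TRUE_FAILURE_RATE
    if random.random() < (P_PUBLISH_FAILED if failed else P_PUBLISH_SUCCESS):
        published.append(failed)

print(f"true failure rate:      {TRUE_FAILURE_RATE:.0%}")                 # 30%
print(f"published failure rate: {sum(published) / len(published):.0%}")  # ~44%
```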

The #1 psychiatry journal, World Psychiatry, also doesn't allow preprints
My most popular cartoon by a long chalk is also oddly niche, in that it mostly appeals to a generation who remember the song it refers to and enjoy the nostalgia for their youth. I can draw one for you if you like.

www.worldofmoose.com/collections/...

re preprints: yes, they're great in many ways, but with a few caveats!

bsky.app/profile/mkar...
over the years i've gradually come to change my view on preprints, which now is: they're harmful & doing net damage (unpopular opinion but there are actual reasons) /3
It wasn't even a paper! It was a blog post! (a pdf on the preprint server arXiv). I love arXiv and preprints (and also the entire field of economics publishing) but this is the downside: blog posts picked up in the media as ready for public consumption when they're not. gizmodo.com/mit-backs-aw...

imo the COI part is extremely complex & disclosures won't do much-- not least bc in tech there are numerous in-built structural biases that don't show in COIs, and as your paper points out too, it can be simply self-defeating for a career to publish certain results

bsky.app/profile/mkar...
How COIs operate in social science & humanities is a huge topic that few talk about, not least as it's damn difficult to grasp the tons of variation in RQs, epistemologies, and histories of fields --
There are very few norms about this in social science IME. At least in my areas of it, industry funding is, comparatively, so rare that it doesn't figure prominently in training or guidance, so people literally don't know whether to disclose affiliation or funding source twice.

Happy that someone wrote the paper, it puts together many things oft-discussed but not addressed. Btw perhaps the recent history of gambling was left out intentionally but if not, you'll find this interesting

bsky.app/profile/mkar...
Finally found time to read Rebecca Cassidy et al.'s eye-opening investigation "Producing gambling research". This is true meta-science, probably the best report I've ever read. Every scientific field should have one 1/2

www.gold.ac.uk/media/docume...

whenever we see publishers doing things that may look as if they're taking responsibility, it's specifically the "performative act" (similar to what's described in the paper) that creates an illusion of action, diverts attention & fools the community (v successfully so far)

meanwhile, as is well known, they do everything to avoid expensive labor & negative publicity like corrections, investigations, and retractions that end up harming the stock price-- again, perfectly logical for the business model, so can't blame them

in fact, publishing "big papers" from tech companies (+collabs) benefits publishers massively, as their brands & flagship journals run on such headline news-- it makes no sense for them to complicate these processes (eg by demanding IRBs when they don't need to)

Elsevier (part of the information company RELX), Springer Nature etc are publicly traded businesses that themselves operate in a similar tech-media industry-- we don't even have to enter the COI mess to note they have zero reason to help fix any of these issues

Reposted by Carl T. Bergstrom

The good kind of provocative perspective, worth reading by HCI/games tech folks too, but I want to add one thing, which is hinted at there but could be even more explicit: the role of publishers in this actor network
1. We ( @jbakcoleman.bsky.social, @cailinmeister.bsky.social, @jevinwest.bsky.social, and I) have a new preprint up on the arXiv.

There we explore how social media companies and other online information technology firms are able to manipulate scientific research about the effects of their products.

I guess the publisher needs to compensate for the 10 years of work... oh wait, the publisher didn't put any work into it