Olmo van den Akker
@denolmo.bsky.social
Postdoc @ QUEST Center for Responsible Research & Tilburg University. Doing meta-research aimed at improving preregistration, secondary data analysis, and peer review.
Proposal to use more nicknames when talking about scientific researchers, the fun of which is nicely illustrated by James "cheaters' bane" Heathers in his acknowledgement slide.

#AIMOS2025
@jamesheathers.bsky.social
November 20, 2025 at 2:57 AM
Slide by Lisa Bero on commercial funding of research. No further comment necessary, I think.

#AIMOS2025
November 18, 2025 at 11:46 PM
There is also a publish-review-curate publishing platform specifically dedicated to meta-research: metaror.org

Send your studies on peer review there and be part of the future of science!

(CoI statement: I'm an ERC representative at MetaROR)

#PRC10
September 4, 2025 at 7:40 PM
eLife (talk by Nicola Adamson) uses a publish-review-curate model and assesses manuscripts with a common set of terms.

For strength of evidence: exceptional, compelling, convincing, solid, incomplete, & inadequate

For significance of findings: landmark, fundamental, important, valuable, & useful
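For reference, a minimal sketch of how these two controlled vocabularies could be encoded for downstream analysis. The terms are the ones listed above; the list layout and the rank helper are my own assumptions, not anything eLife publishes.

```python
# eLife's common assessment terms, ordered from strongest to weakest as listed
# in the talk. The rank() helper (0 = strongest) is an assumption for illustration.
STRENGTH_OF_EVIDENCE = ["exceptional", "compelling", "convincing",
                        "solid", "incomplete", "inadequate"]
SIGNIFICANCE_OF_FINDINGS = ["landmark", "fundamental", "important",
                            "valuable", "useful"]

def rank(term: str, scale: list[str]) -> int:
    """Return the position of a term on its scale (0 = strongest)."""
    return scale.index(term.lower())

print(rank("solid", STRENGTH_OF_EVIDENCE))          # 3
print(rank("important", SIGNIFICANCE_OF_FINDINGS))  # 2
```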

#PRC10
September 4, 2025 at 7:17 PM
New peer review dataset incoming!

Includes authors, topic area, editorial decision, author characteristics (institutional prestige, region, gender), BoRE evaluations, and review characteristics (length, sentiment, z-score, reviewer gender).

(Talk by Aaron Clauset)
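A hedged sketch of what a single record in such a dataset might look like, based only on the variables listed above; the field names and example values are my assumptions, not the actual schema.

```python
# Hypothetical record mirroring the variables mentioned in the talk; field
# names and example values are assumptions, not the real dataset schema.
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    topic_area: str
    editorial_decision: str            # e.g. "accept" or "reject"
    author_institution_prestige: float
    author_region: str
    author_gender: str
    bore_evaluation: str               # BoRE evaluation of the manuscript
    review_length_words: int
    review_sentiment: float            # e.g. a polarity score
    review_zscore: float
    reviewer_gender: str

example = ReviewRecord("network science", "reject", 0.8, "North America",
                       "unknown", "solid", 742, -0.1, 1.3, "unknown")
```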

#PRC10
September 4, 2025 at 7:01 PM
Christos Kotanidis checked differences in abstracts between submissions and published papers & assessed whether these differences indicated higher or lower research quality.

Abstracts typically improved, especially in the big five medical journals. Evidence for the effectiveness of peer review?
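One way to get at such submission-versus-publication differences is a plain text diff; below is a minimal sketch using Python's difflib, not Kotanidis's actual pipeline, and the example abstracts are invented.

```python
# Illustrative only: surface wording changes between a submitted and a
# published abstract with difflib. Example abstracts are invented.
import difflib

submitted = "We find a large effect of the drug on mortality.".split()
published = "We find a modest effect of the drug on all-cause mortality.".split()

diff = difflib.unified_diff(submitted, published,
                            fromfile="submitted", tofile="published", lineterm="")
print("\n".join(diff))
```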

#PRC10
September 4, 2025 at 6:59 PM
Andrea Corvillon on distributed vs. panel peer review at the ALMA Observatory:

The most experienced PIs no longer have the best ranks in a distributed review system, but why that is remains unclear.

#PRC10
September 4, 2025 at 4:33 PM
Interesting to see how different the conference review process (and publishing norms) are in computer science compared to other fields.

How do these differences come about? Fundamental differences between fields or chance and inertia?

#PRC10
September 4, 2025 at 4:20 PM
Alexander Goldberg did this with a 7-point Likert scale for overall review quality, but also by assessing 4 sub-categories: reviewers' understanding of the paper, whether important elements were covered, whether reviewers substantiated their comments, and the constructiveness of reviewer comments.
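A minimal sketch of what such a rubric could look like in code, assuming every item is rated on a 1-7 scale; the field names and validation are my guesses, not Goldberg's actual instrument.

```python
# Hypothetical rubric mirroring the dimensions above: one overall 7-point
# Likert rating plus four sub-category ratings. Names and the range check
# are assumptions, not Goldberg's instrument.
from dataclasses import dataclass

@dataclass
class ReviewQualityRating:
    overall: int                 # 1-7 Likert
    understanding_of_paper: int
    coverage_of_elements: int
    substantiation: int
    constructiveness: int

    def __post_init__(self):
        for name, value in vars(self).items():
            if not 1 <= value <= 7:
                raise ValueError(f"{name} must be on a 1-7 scale, got {value}")

rating = ReviewQualityRating(overall=5, understanding_of_paper=6,
                             coverage_of_elements=4, substantiation=5,
                             constructiveness=6)
```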
September 4, 2025 at 4:16 PM
Di Girolamo explains why the use of the phrase "to our knowledge" lacks reproducibility and accountability.

Good trigger to make an edit in a grant proposal I'm writing.

#PRC10
September 4, 2025 at 3:04 PM
Note by Yulin Yu: Data repurposing may serve as an essential mechanism driving scientific innovation BUT may not always garner immediate recognition.
September 4, 2025 at 2:41 PM
Data repurposing: taking existing data and reusing it for a different purpose.

(Presentation by Yulin Yu)

Studies repurposing data are at higher risk of bias, so make sure to preregister them (check here for a template): research.tilburguniversity.edu/en/publicati...

#PRC10
September 4, 2025 at 2:40 PM
Ian Bulovic used OpenAI's GPT to assess selective outcome reporting.

Findings:
- Much outcome switching but decrease over time
- Industry-sponsored trials most at risk
- Assessing outcome switching may seem trivial but is hard even for human coders
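For a sense of what an LLM-based check might look like: a hedged sketch using the OpenAI Python client to compare a registered primary outcome with the reported one. The prompt, the model name, and the output format are my own assumptions, not the pipeline from the talk.

```python
# Illustrative sketch only: ask an LLM whether the reported primary outcome
# matches the registered one. Prompt wording, model choice and output parsing
# are assumptions; this is not the classifier used in the study.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def outcome_switched(registered: str, reported: str) -> str:
    prompt = (
        "Registered primary outcome:\n"
        f"{registered}\n\n"
        "Primary outcome reported in the publication:\n"
        f"{reported}\n\n"
        "Answer with one word: SAME, SWITCHED, or UNCLEAR."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

print(outcome_switched("All-cause mortality at 12 months",
                       "Hospital readmission within 30 days"))
```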

#PRC10
September 4, 2025 at 2:19 PM
A meta-perspective by Malcolm Macleod on the presentations at #PRC10.

Are we going for low-hanging fruit too much in research on peer review / publication?
September 4, 2025 at 1:23 PM
Leslie McIntosh:

Markers of (dis)trust in science: Pay attention to email addresses (use of hotmail.com and underscores) and institutional affiliations (new and unknown organizations without verifiable addresses)
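A toy sketch of how the e-mail heuristic could be operationalised. The flagged patterns come straight from the slide, but the function itself is my invention and far too crude to use on its own.

```python
# Toy heuristic based on the markers mentioned above (free-mail domain,
# underscores in the local part). Invented for illustration only; real
# integrity screening is much more involved.
def email_flags(address: str) -> list[str]:
    local, _, domain = address.partition("@")
    flags = []
    if domain.lower() == "hotmail.com":
        flags.append("free-mail domain")
    if "_" in local:
        flags.append("underscore in local part")
    return flags

print(email_flags("john_smith_1987@hotmail.com"))
# ['free-mail domain', 'underscore in local part']
```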

#PRC10
September 3, 2025 at 4:55 PM
How do paper mills operate? I always thought they were in cahoots with illegitimate journals, but apparently they also target legitimate journals, without the editors of those journals being involved at all.

(Talk by Tim Kersjes from Springer Nature)

Could open review reports solve this issue?

#PRC10
September 3, 2025 at 3:39 PM
Findings from Mario's study:
- Open reviews include more sentences, mainly involving suggestions and solutions, indicating more constructive reviews
- Open reviews had higher information content scores

His explanation: There is more accountability in an open system
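To make the comparison concrete, a hedged sketch of the kind of simple text metrics involved: a sentence count plus a word-entropy proxy for information content. The entropy proxy is my assumption; the study's actual information content measure may well be defined differently.

```python
# Illustrative metrics only: sentence count and a Shannon-entropy proxy for
# information content. These are assumptions, not the measures from the study.
import math
import re
from collections import Counter

def sentence_count(text: str) -> int:
    return len([s for s in re.split(r"[.!?]+", text) if s.strip()])

def word_entropy(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

review = ("The methods section omits the sampling procedure. "
          "Consider reporting it and adding a power analysis.")
print(sentence_count(review), round(word_entropy(review), 2))
```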

#PRC10
September 3, 2025 at 2:11 PM
Start of the talk by @mariomalicki.bsky.social: "Only 0.1% of journals provide some sort of open peer review"

Shockingly low number for something that should be standard given that reviews are part of the scientific discourse and therefore should be public.

#PRC10
September 3, 2025 at 2:11 PM
What do researchers use AI for?

Talk by Isamme Al Fayyad on the first study from his PhD @maastrichtu.bsky.social

#PRC10
September 3, 2025 at 1:44 PM
Hot topic at the moment: AI in peer review.

Ashia Livaudais from startup SymbyAI discussed this during a session on peer review at #metascience2025

Conclusion 1: reviews from human-AI combinations were rated higher in quality than reviews from humans or AI alone
July 1, 2025 at 11:55 AM
I'm getting the chills from this invite 🥶🥶🥶
April 1, 2025 at 11:14 AM