Ingo Rohlfing
@ingorohlfing.bsky.social

I am here for all interesting and funny posts on the social sciences, broadly understood and including open science and meta science, academia, teaching and research. https://linktr.ee/ingorohlfing

Doing non-causal inference (and being explicit about it), yet using a causal word as second word in the title.

If you pay Nature €10,690, they will publish this in Nature Aging.

I can tell you what I think of that for free.

www.nature.com/articles/s43...

The added value is marginal compared to several blog posts on the topic because the product does not seem in any way superior to alternatives. Let's grant the benefit of the doubt here because the article states there is no conflict of interest. 2/

True, the use cases and statistical concepts are very basic. I cannot believe this is all they have to offer, but why not show it if one has more to offer? Speculation is futile, but let me speculate anyway: I am a bit surprised that such a basic article promoting a product has been published 1/

cover approx. 95% of what one needs for introductory statistics, and you don't have to worry about prompting and hallucinations.
For graduate training, it may be different, but I would then rather try Positron with LLM-assistance than a prompt-only interface. 4/

I don't know about alternatives that would work better, but I guess there are some.
2) More importantly, I am not confident there is sufficient added value. For undergraduate training, JASP or Jamovi should work well with their GUI. (I use JASP in my course for illustration.) They are free, 3/

to me what the added value is in 2025. It discusses standard prompting without any insights into, say, how students liked LLM-based training.
Two other points:
1) If one uses an LLM, I am not sure why one would use Julius. The free account does not suffice for teaching because of a prompt limit. 2/

Harnessing generative artificial intelligence for teaching statistics in medical research: Strategies for accurate hypothesis testing
onlinelibrary.wiley.com/doi/full/10....
Using julius.ai, the article describes how to use LLMs in a stats class. I don't want to sound harsh, but it is not clear 1/

Is this speaking from experience? If the sample size is derived from a smallest effect of substantive interest, or an otherwise well-justified effect size, this should not be a point of concern.
In practice, I can imagine reviewers getting caught up on N = 34 no matter what and saying it is too small.
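The point about deriving the sample size from a smallest effect of substantive interest can be made concrete. A minimal sketch in Python, using the standard normal approximation for a two-group comparison of means (the function name and the chosen effect sizes are illustrative, not from the thread):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison
    of means via the normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2, with d = Cohen's d."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# The smaller the smallest effect of substantive interest,
# the larger the required N: justify d first, then read off N.
print(n_per_group(1.0))  # large effect
print(n_per_group(0.5))  # medium effect
```

The exact t-test calculation gives slightly larger numbers (e.g. 64 per group for d = 0.5 at 80% power); the point is only that N follows from the justified effect size, not the other way round.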

Reposted by Ingo Rohlfing

Also not sure about #2 - at least in public health and other policy-relevant fields. Often what policymakers need most is a really good review article. If we want evidence to inform policy specifically, reducing that type of study wouldn't be where I'd start.
I am slow to react to this recent Stockholm Declaration on scientific publishing. A lot of it sounds good, but I don't see how we get from here to there. I worry nothing substantial will happen until the cost disease kills the host.

Reposted by Ingo Rohlfing

We might ask whether we still need data visualization training when we have powerful LLMs to help us. Certainly, LLMs can help to optimize our code. But without a profound understanding of the produced code, we run the risk of creating figures that may look nice but that misrepresent our data. (6/n)

In this sense, it is also not an optimal constellation. It is related to the idea that science advances one funeral at a time. www.aeaweb.org/articles?id=... Of course, this is only my impression and I am not naming any subfield or names here, but this seems to be more than just a "corner issue".

and lift each other's publications into reputable journals. This is then less a waste of resources compared to what you are addressing. Still, these are closed shops that are more or less immune from severe criticism and stifle theoretical or methodological innovation on their topic. 2/

My sense is this is not just a phenomenon in certain corners of science (or rather, academia). I think sometimes there are cliques closer to the core, meaning they work on relevant topics with, say, solid methods. From the outside, it looks like they happen to review each other's papers 1/
🧵There's this phenomenon you sometimes see in certain corners of science where a small group of researchers all work on the same narrow topic and mostly just talk to each other. It becomes a really insular community: everyone cites everyone else, ... 1/5

Reposted by Ingo Rohlfing

A spicier opinion is that academia taking back publishing is not necessarily a path to innovation and efficiency. Do you associate universities with efficiency? The problem with for-profit publishing is not the profit, it's the oligopoly power of major publishers. Anti-trust in our lifetime?

Reposted by Ingo Rohlfing

Hear, hear: "German Research Foundation wants to pull data out of US clouds" | heise online
www.heise.de/news/US-Clou...
@dfg.de announces its own funding line so that research data can be stored more securely on a European cloud.

If the author identifies as politically left, it would support his point, wouldn't it?
Regardless, this is neither theoretically nor empirically plausible, let alone convincing

I had forgotten about this, but makes sense. Besides everything else, referencing a conversation with you without permission is bad. Based on the abstract, there appears to be a wide gulf between the strength of the claims and the strength of the evidence.

Whether a discipline is homogeneous on the left or right side of the spectrum does not seem to matter, though the title emphasizes this. There is an empirical analysis in the paper, but I doubt it is close to being conclusive, and counterarguments against the causal chain come to mind. 2/

The methodological stagnation of #sociology is related to its left-wing skew
link.springer.com/article/10.1... #MetaScience I don't have access, but there is a detailed abstract. Argument goes political homogeneity => no truth-seeking => no interest in rigor and openness 1/

Dealing with 1 color #dataviz
www.matplotlib-journey.com/module2/deal... Very useful intro to the different ways of picking a color. Didn't know until now what RGB stands for and how hex codes work. Plus there is the HSV model, which combines hue, saturation and value (brightness).
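The three notations mentioned there describe the same color. A minimal sketch using Python's standard-library colorsys module (the RGB triple for matplotlib's default blue is just an illustrative choice, and the helper names are mine):

```python
import colorsys

def rgb_to_hex(r, g, b):
    """Format 8-bit RGB channels (0-255) as a #rrggbb hex code."""
    return "#{:02x}{:02x}{:02x}".format(r, g, b)

def rgb_to_hsv_deg(r, g, b):
    """Convert 8-bit RGB to HSV: hue in degrees, saturation and
    value (brightness) in percent."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return round(h * 360), round(s * 100), round(v * 100)

# matplotlib's default blue in all three notations
print(rgb_to_hex(31, 119, 180))      # hex code
print(rgb_to_hsv_deg(31, 119, 180))  # (hue, saturation, value)
```

Hex is just RGB in base 16; HSV is an alternative coordinate system over the same color, which is why picking by hue feels more intuitive than juggling three channel values.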

Reposted by Ingo Rohlfing

My theory is that for practitioners, regression models should be like pocket money: you get a fixed number per week to do whatever you like with until we're sure you won't blow the whole lot on silly stuff, get caught up in a get-rich-quick scheme, or accidentally leave them in a drawer somewhere.
This paper’s been popping up as “evidence” that you can’t do real #causalinference w/ obs data. To me it shows you need a rigorous pre-specified design (in addition to the willingness to fold when your hypothesis cannot be answered with the data at hand). #EpiSky, #CausalSky, #AcademicSky

what the basis for the rankings was. The survey asked whether one had submitted to a journal, published in it, or reviewed for it. I don't know why this was asked, as I don't think it is related to reputation (except maybe for having published in a journal). 2/

A couple of weeks ago, I took part in a survey of journal reputation. Don't remember who runs it, but it was sponsored by the ECPR, I think. I may find the email again.
On that note: One had to rank 50 journals or so with option not to rank at all. Personally, I'd have found it interesting to ask 1/
arXiv will no longer accept review articles and position papers unless they have been accepted at a journal or a conference and complete successful peer review.

This is due to being overwhelmed by hundreds of AI-generated papers a month.

Yet another open submission process killed by LLMs.
Attention Authors: Updated Practice for Review Articles and Position Papers in arXiv CS Category – arXiv blog
blog.arxiv.org
Better get your paper submissions in for the @epssnet.bsky.social conference in June 2026, as the deadline is 7 November, and we have no intention of extending it (given the submission numbers)!
epssnet.org/belfast-2026...
Call for Papers | EPSS Belfast 2026 Conference
Submit your abstract or full paper for EPSS Belfast 2026. Share cutting‑edge political science research, network with peers & contribute to academic impact.
epssnet.org

Reposted by Ingo Rohlfing

A useful discussion tool is: At what point would you expect your peers (or whoever your target audience might be, such as members of the public) to throw rotten tomatoes at you for bragging about your tiny effect size? At what point would you be embarrassed?

So everyone comes out of the process damaged - those involved in the project and the ministry - and this will always hang over the project, no matter how it is implemented and what the results are. 2/

Reposted by Claudia Diehl

Despite criticism: research ministry funds controversial project against antisemitism - correctiv.org
correctiv.org/aktuelles/in...
Among other things, the ministry issued a "conditional accept" before the external second review took place, and pushed the approval through despite objections. 1/