#Inference
Forbes estimates OpenAI is blowing $15m a day on Sora. Sure, why not? I bet OpenAI’s inference costs are absolutely horrifying
www.forbes.com/sites/phoebe...
Here’s How Much Cash OpenAI Is Burning On AI Video App Sora. What It Means
Some back-of-napkin math suggests OpenAI is spending more than a quarter of what it’s making to power the AI slop factory.
www.forbes.com
November 11, 2025 at 3:18 AM
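For scale, a rough annualization of the figures quoted above. The $15M/day estimate and the "more than a quarter of what it's making" framing are Forbes's; treating the spend as steady and reading the ratio literally is my own back-of-napkin extrapolation, not anything reported.

```python
# Back-of-napkin annualization of the Forbes figures; the steady-spend
# assumption and the implied-revenue reading are illustrative, not reported.
daily_sora_spend = 15e6                      # USD/day, Forbes estimate
annual_sora_spend = daily_sora_spend * 365   # ~$5.5B/year if sustained

# If that spend really is "more than a quarter" of revenue, revenue is
# at most roughly four times it.
implied_revenue_ceiling = 4 * annual_sora_spend

print(f"annualized Sora spend: ${annual_sora_spend / 1e9:.1f}B")         # ~5.5
print(f"implied revenue ceiling: ${implied_revenue_ceiling / 1e9:.1f}B")  # ~21.9
```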
"Schrödinger's causal inference" (n):

The practice of making causal claims or interpretations within a scientific article - typically in the title, abstract, implications, or conclusion - while simultaneously warning that the study design is unsuitable for causal inference.
November 11, 2025 at 11:36 AM
Great example of Schrödinger's Causal Inference!

Title states that multilingualism "protects" against accelerated aging. Discussion states that the study design "does not establish causality" and "proper causal inference would require experimental, quasi-experimental or intervention-based designs"
November 11, 2025 at 10:34 AM
A lil' look at what's in the kitchen at the moment. Spent the day making this very high level visual of LLM inference.
November 10, 2025 at 6:13 PM
typo, inference*
November 8, 2025 at 9:59 PM
I think the only reasonable inference you can make from their actions is that Dem leadership likes Trump and wants to help him. The only reason that isn't obvious is the "well no that couldn't be true" factor
Democrats ran the table on Tuesday at the state and local levels, exciting not just their base but also “normies” fed up over the last 10 months, so of course national Democrats want to do everything to let the air out of the balloon.
At this point, it’s clear that the Hamburglar, Dr Evil, and Sideshow Bob are advising Chuck Schumer.
November 9, 2025 at 9:38 PM
I am a data bro I am going to show you a graph and claim I am doing deep analysis while not saying anything or making any inference other than line go up or down you are all bad faith for not taking me serious
November 11, 2025 at 3:10 AM
Misinformation research has a causality problem: lab experiments are limited; observational studies confounded.

We used causal inference on 9.9M tweets, quantifying effects in the wild while blocking backdoor paths.

Does misinfo get higher engagement? Are the discussions that follow more emotional? 🧵
OSF
osf.io
November 11, 2025 at 9:59 AM
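The thread above doesn't spell out its estimator, so here is only a generic, minimal sketch of what "blocking backdoor paths" can look like in practice, on made-up data: a confounder (think account popularity) drives both misinfo posting and engagement, the naive comparison absorbs that backdoor bias, and adjusting for the confounder closes it. Variable names and effect sizes are invented for illustration, not taken from the paper.

```python
# Hypothetical backdoor-adjustment sketch on simulated data; not the paper's pipeline.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
c = rng.normal(size=n)                           # confounder, e.g. account popularity
t = (rng.normal(size=n) + c > 0).astype(float)   # "treatment": posting misinfo, depends on c
y = 2.0 * t + 3.0 * c + rng.normal(size=n)       # engagement; true causal effect of t is 2.0

# Naive comparison ignores the backdoor path t <- c -> y and is inflated.
naive = y[t == 1].mean() - y[t == 0].mean()

# Blocking the backdoor path: regress y on t while adjusting for c.
X = np.column_stack([np.ones(n), t, c])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"naive estimate:    {naive:.2f}")    # roughly 5.4, biased upward by confounding
print(f"adjusted estimate: {beta[1]:.2f}")  # close to the true effect, 2.0
```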
My theory is that OpenAI's inference costs are much higher than expected. Every new GPU is marketed for inference, Stargate Abilene is full of inference-focused Blackwell GPUs, and OpenAI's latest releases have all been very, very inference intensive.
www.wheresyoured.at/where-is-ope...
November 7, 2025 at 5:35 PM
Doing non-causal inference (and being explicit about it), yet using a causal word as second word in the title.

If you pay Nature €10,690, they will publish this in Nature Ageing.

I can tell you what I think of that for free.

www.nature.com/articles/s43...
November 11, 2025 at 7:58 AM
📣 Upcoming Lecture Series 2025: Causal Inference Methods for Real-World Data

🗣️ Dr. @georgiatomova.bsky.social of @clscohorts.bsky.social will present "Causal inference w/ compositional data".

📅 From 11:00 on 14 Nov. 2025 at @liser.lu

🍽️ Lunch registration is mandatory: lsurvey.lih.lu/index.php/44...
November 11, 2025 at 9:04 AM
Very interesting perspective! I often have the opposite thought: "Experimenting like a modeller"
Perhaps the two could co-exist for better workflows. Extensive exploration on simulated data should be the norm before running experiments to make sure the design is capable of the intended inference.
"Validate With Simulated Truth: A first habit is to test whether an analytical pipeline can recover known conditions."

Very good advice below. So much COVID nonsense (e.g. 'immunological dark matter') basically came down to a non-identifiable model that hadn't been properly tested.
Modelling Like an Experimentalist
Dahlin et al. (2024) apply experimental thinking to a model of mosquito-borne disease transmissions.
onlinelibrary.wiley.com
November 10, 2025 at 4:02 PM
I'm facilitating a causal inference reading group next semester for Sociology PhD students. (I will also be learning!) If there are (1) pedagogical articles or (2) empirical examples in soc that you ❤️, will you share in the comments? [And please RT to help me crowd-source!]
November 11, 2025 at 9:28 PM
So many nonsense ad hoc pipelines could be prevented by requiring that they work on synthetic data.

I tend to think of experiments as special cases of inference, since most of the problems I work on cannot be studied in experiments. But I get that many researchers see experiments as the base analogy.
"Validate With Simulated Truth: A first habit is to test whether an analytical pipeline can recover known conditions."

Very good advice below. So much COVID nonsense (e.g. 'immunological dark matter') basically came down to a non-identifiable model that hadn't been properly tested.
Modelling Like an Experimentalist
Dahlin et al. (2024) apply experimental thinking to a model of mosquito-borne disease transmissions.
onlinelibrary.wiley.com
November 10, 2025 at 12:41 PM
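Since the quoted advice ("test whether an analytical pipeline can recover known conditions") is concrete enough to demo, here is a minimal sketch of that habit under assumptions of my own: a toy exponential-growth model and a log-linear fit, not the Dahlin et al. setup. Plant a known parameter, run the pipeline on synthetic data many times, and check that it gets the truth back before it ever touches real data.

```python
# Minimal "validate with simulated truth" check; the model, noise level, and
# tolerance are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(42)

def simulate(r, y0, t, noise_sd):
    """Noisy observations of exponential growth y0 * exp(r * t)."""
    return y0 * np.exp(r * t) * np.exp(rng.normal(0, noise_sd, size=t.shape))

def fit_growth_rate(y, t):
    """The 'pipeline' under test: log-linear least squares for the growth rate."""
    slope, _ = np.polyfit(t, np.log(y), 1)
    return slope

t = np.arange(0, 30, dtype=float)
true_r = 0.15
estimates = [fit_growth_rate(simulate(true_r, 10.0, t, noise_sd=0.2), t)
             for _ in range(200)]

bias = np.mean(estimates) - true_r
print(f"true r = {true_r}, mean estimate = {np.mean(estimates):.3f}, bias = {bias:+.4f}")
assert abs(bias) < 0.02, "pipeline fails to recover the known growth rate"
```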
Darren, by using appropriate statistics/study design/causal inference methods you make nice findings go away. Surely nobody wants that.
November 7, 2025 at 10:38 AM
OpenAI will need those 5 billion for like a week of Sora inference.
November 11, 2025 at 2:17 PM
Epic picture of @clanfear.bsky.social delivering a thunderous introduction to the potential outcomes framework in Leeds last week! Passion emanating from every fibre of his being.

This is what the causal inference movement needs. A joy to behold.
November 11, 2025 at 10:16 AM
notable: they ripped out the silicon that supports training

they say: “it’s the age of inference”

which, yeah, RL is mostly inference. Continual learning is almost all inference. Ambient agents, fast-growing inference demand from general audiences

kartik343.wixstudio.com/blogorithm/p...
November 7, 2025 at 12:43 AM
Peter, this is just the old-fashioned claim that black people are violent. It's not an inference about citizenship, it's just National Front-style racism.
November 12, 2025 at 1:21 PM
This slide unfortunately generalizes well 🥲
November 11, 2025 at 9:25 AM
The advances we've made in statistics, experimental study design, and causal inference over the past century are remarkably useful for understanding our world. But there has never been a push to make people use them like the one we are seeing with generative AI. Perhaps take a moment to consider why.
November 7, 2025 at 9:07 AM
It is incredible to me how some people are claiming that they can't make a simple inference from this comment about who he voted for. I can't decide if they really can't tell, or they realize how fatally it would undermine their claim to be better Democratic voters than progressives are.
November 11, 2025 at 7:34 PM
I mean people are saying they are upset about inflation, I don't think the inference is any more complicated than that.
Trump becomes unpopular in 2017: ????

Trump stays unpopular in 2018-2020: ????

Biden becomes unpopular in 2021: ????

Biden stays unpopular in 2022-23: CLEARLY INFLATION

Biden stays unpopular in 2024: HANGOVER FROM INFLATION??

Trump is unpopular in 2025: PROBABLY STILL INFLATION??

empiricism
November 7, 2025 at 3:36 AM
Mark your diaries! The ViCBiostat Summer School returns from 13-20 Feb 2026, in Melbourne and online.
Courses include causal inference, cluster randomised trials, meta-analysis and the estimand framework.
Further details TBA shortly - sign up to our mailing list at www.vicbiostat.org.au
#statistics
November 12, 2025 at 4:09 AM