Washington Irving
irvingwashington22.bsky.social
respect sampling variability
Back to the future!
November 22, 2025 at 5:19 PM
the obvious answer is that i am incorrect, uninformed, or obstructionist, because otherwise these 10 prior papers from high-impact journals would never have been published...
November 19, 2025 at 2:59 PM
try explaining immortal time bias to a bunch of ICU researchers. their response has invariably been to hand me a stack of papers studying the same condition where the clock started at ICU admission, regardless of whether the pt had been in the hospital for 5 min or 5 days before going to the ICU.
November 19, 2025 at 2:57 PM
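a minimal simulation of the point above (all numbers made up): every patient's death time from hospital admission is drawn from the same distribution, yet starting the clock at ICU admission makes late transfers look sicker, because the pre-ICU time they were guaranteed to have survived gets thrown away.

```python
import random

random.seed(1)

# hypothetical cohort: death time from *hospital* admission is drawn
# identically for everyone; only the hospital-to-ICU delay differs
def mean_survival_from_icu(delay, n=100_000):
    survival = []
    for _ in range(n):
        death = random.uniform(0, 10)   # days from hospital admission
        if death > delay:               # must survive the delay to reach the ICU
            survival.append(death - delay)
    return sum(survival) / len(survival)

short = mean_survival_from_icu(delay=0.1)  # roughly 4.95 days
long_ = mean_survival_from_icu(delay=5.0)  # roughly 2.5 days

# identical underlying mortality, yet the ICU-admission clock makes late
# transfers appear to do far worse -- their immortal time was discarded
```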
i'm sure it depends on the situation. but i agree that there is not a lot of critical thinking. and while sometimes overly complex methods are not the answer (or even a smokescreen for junk data), sometimes they are needed, since--as you said--human health is complex.
November 19, 2025 at 2:52 PM
to wit, i am currently trying to convince a clinician that even tho a prior paper used "avg marginal effects", the quantity that they really want is the avg marginal effect at the mean. 🫠
November 19, 2025 at 2:46 PM
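for anyone following along, the distinction above in a toy logistic model (coefficients and data are made up): the AME averages the slope dp/dx over the sample, while the MEM evaluates that slope once, at the covariate mean. in a nonlinear model those are not the same number.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical fitted logistic model: logit(p) = b0 + b1*x
b0, b1 = -2.0, 0.8
x = [0, 0, 1, 2, 3, 5, 8]  # toy covariate values

# average marginal effect (AME): average dp/dx = b1*p*(1-p) over the sample
ame = sum(b1 * sigmoid(b0 + b1 * xi) * (1 - sigmoid(b0 + b1 * xi))
          for xi in x) / len(x)

# marginal effect at the mean (MEM): same derivative, evaluated at x-bar
xbar = sum(x) / len(x)
p_bar = sigmoid(b0 + b1 * xbar)
mem = b1 * p_bar * (1 - p_bar)

print(f"AME = {ame:.3f}, MEM = {mem:.3f}")  # ~0.113 vs ~0.199
```

they disagree because the sigmoid is nonlinear: averaging the slopes is not the same as taking the slope at the average.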
medical journal reviewers will scoff at an unfamiliar method. but if there is a ref to other papers that used similar methods, then everything is fine. in my own experience, this phenomenon permeates the medical research field.
November 19, 2025 at 2:43 PM
2nd, and more importantly, is that they don't understand enough stats/methods/etc to really understand what they have, and whether or not they can even get what they (think) they want.
November 19, 2025 at 2:40 PM
to me, the reasoning is twofold. first, they see something that was published using method X and, ergo, another paper using method X is also likely to get published.
November 19, 2025 at 2:38 PM
agreed--great for teaching/learning. flat/default priors can nicely highlight how the likelihood/data dominates the posterior. it's also illustrative, imo, when a frequentist and bayesian model give similar results--really drives home the need to understand priors and how they influence the posterior
November 18, 2025 at 4:25 PM
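the flat-prior point above in the simplest conjugate setting (all numbers hypothetical): with a Beta prior on a success probability, the posterior under a flat Beta(1,1) prior sits almost on top of the MLE, while an informative Beta(20,20) prior visibly pulls it toward 0.5.

```python
# toy beta-binomial: posterior for a success probability
k, n = 70, 100  # hypothetical observed successes / trials

def posterior_mean(a, b, k, n):
    # conjugate update: Beta(a, b) prior -> Beta(a + k, b + n - k) posterior
    return (a + k) / (a + b + n)

mle = k / n                                  # 0.700
flat = posterior_mean(1, 1, k, n)            # ~0.696: likelihood dominates
informative = posterior_mean(20, 20, k, n)   # ~0.643: pulled toward 0.5
```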
I have a serious love-hate relationship with linear algebra.
November 14, 2025 at 4:20 PM
This has become my default approach because the search function is so horrid.
November 14, 2025 at 4:19 PM
"linear in the predictors" is something i probably heard 500 times before someone explained it to me. and a few years passed before i truly understood it.

similar to "holding other predictors constant". a critical stats concept that sounds nice, but understanding it is maybe not so intuitive...
November 14, 2025 at 3:44 PM
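a tiny sketch of both phrases (coefficients made up): "linear" means linear in the coefficients, so a squared or transformed term is fair game, and "holding other predictors constant" just means that bumping x1 by one unit while x2 stays fixed moves the prediction by exactly b1, no matter where you start.

```python
# linear in the parameters: the prediction is a weighted sum of (possibly
# transformed) predictors, so y = b0 + b1*x1 + b2*x2 counts even if x2 = x1**2
def predict(b, x1, x2):
    b0, b1, b2 = b  # hypothetical coefficients
    return b0 + b1 * x1 + b2 * x2

b = (1.0, 2.0, -0.5)

# "holding other predictors constant": raise x1 by 1 with x2 fixed,
# and the prediction changes by exactly b1
delta = predict(b, 4.0, 3.0) - predict(b, 3.0, 3.0)
print(delta)  # 2.0, i.e. b1
```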
‘Promiscuous dichotomization’ is a nice turn of phrase.
November 14, 2025 at 12:13 AM
What time is dinner? And also, what’s your address? I can bring a nice bottle of wine.
November 13, 2025 at 5:17 PM
I’m trying not to go full chicken little, but given how often I’ve gotten garbage results from LLM queries…
November 13, 2025 at 3:48 PM
ChatGPT fails the Turing test. Numerical integration is magic!
November 12, 2025 at 11:49 PM