Vilgot Huhn
@vilgothuhn.bsky.social
Confused PhD student in psychology at Karolinska Institutet, Stockholm. GAD, ICBT, mechanisms of change. Organizing the ReproducibiliTea JC at KI.
Website: https://vilgot-huhn.github.io/mywebsite/
Personal blog at unconfusion.substack.com
how is it even possible that Outlook's search function is this bad? can I please just get emails that contain the matching string, in chronological order? you're a trillion dollar company?
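The whole feature request fits in a few lines. A minimal sketch (hypothetical `Email` type and field names, plain case-insensitive substring match, newest first):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Email:
    subject: str
    body: str
    received: datetime

def search(emails, query):
    """Exact substring match over subject and body, newest first."""
    q = query.lower()
    hits = [e for e in emails if q in e.subject.lower() or q in e.body.lower()]
    return sorted(hits, key=lambda e: e.received, reverse=True)
```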
November 11, 2025 at 10:41 AM
Reposted by Vilgot Huhn
Glad to see Elsevier is investing in important things, like suggesting to readers that AI read the articles for them. Nothing screams quality like claiming that reading their articles is wasted time
November 6, 2025 at 10:15 AM
Reposted by Vilgot Huhn
I think this is kind of neat and I don't think anyone else has noticed it (I've looked and I can't find anyone who has) osf.io/preprints/so...

Maybe I should back off "justification" language, but it's at least a remarkable coincidence. I still think someone else *must* have noticed it...
October 24, 2025 at 12:23 PM
@lakens.bsky.social pointed out on twitter that these text-mining studies (on mostly abstracts) are biased to overestimate the problem because exact test-statistics are more often reported for significant tests while non-significant tests are just described in words.
Look at the distribution of z-values from medical research!
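The selection mechanism described here is easy to simulate. A toy sketch (all rates made up for illustration): every true effect is null, so about 5% of z-values are "significant" by chance, but significant results get an exact statistic reported far more often than non-significant ones, so the reported distribution overrepresents large z:

```python
import random

random.seed(1)

def simulate(n_tests=100_000, p_report_sig=0.9, p_report_ns=0.3):
    """Fraction of |z| > 1.96 among all tests vs. among reported tests,
    under differential reporting of significant results."""
    all_z, reported = [], []
    for _ in range(n_tests):
        z = random.gauss(0, 1)  # every null is true here
        all_z.append(z)
        sig = abs(z) > 1.96
        if random.random() < (p_report_sig if sig else p_report_ns):
            reported.append(z)
    frac_sig = lambda zs: sum(abs(z) > 1.96 for z in zs) / len(zs)
    return frac_sig(all_z), frac_sig(reported)
```

Text-mining only the reported statistics would suggest roughly 14% significance where the true rate is 5%.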
November 5, 2025 at 1:28 PM
As an undergraduate I was taught Popper’s criticism of psychoanalysis, summarized as ”psychoanalysis makes no falsifiable predictions”, but in hindsight I think that was more a critique of how Freudians themselves responded to counterexamples. Surely the theories make (some) predictions?
November 3, 2025 at 6:48 AM
There’s a fuzzy line between being smart and insider trading, and thus it should be made illegal.
November 1, 2025 at 5:44 PM
I try not to post about US politics too much (unless it relates to science) but I find this development very interesting and worrying. I hope these negative-polarization doom spirals only happen in two-party systems.
This is the sound of candidates losing the struggle against the crushing weight of partisan gravity.

This is nationalization and polarization and presidentialization swallowing everything else.

This is the dangerous collapse of dimensionality, in one chart
leedrutman.substack.com/p/the-modera...
October 30, 2025 at 2:58 PM
I’ve taken up @wiringthebrain.bsky.social’s book ”Free Agents: How Evolution Gave Us Free Will” again and am becoming increasingly convinced Sapolsky (at best) only skimmed the book before their debate.
October 30, 2025 at 11:22 AM
Admitting that you’re stupid can be like a superpower in a lot of academia tbh. Not stupid as in ”I am incapable of ever understanding” but just don’t pretend you understand out of embarrassment.
October 30, 2025 at 11:13 AM
It could turn out ”intelligence” is more like doing a specific type of action accurately, and much like how a perfectly accurate dart-throwing robot isn’t that much more dangerous than a very accurate dart-throwing robot, ”superintelligence” just doesn’t do that much.
October 30, 2025 at 8:59 AM
asteriskmag.com/issues/12-bo...

Thought this quote summarized the tensions and contradictions around psychiatric diagnoses in an illuminating way. Good nuanced article. I have my own unstructured thoughts on the subject (as do most clinicians, I think).
October 28, 2025 at 6:51 PM
Some months later I've still not really resolved this tension. If you have a lot of data you get estimates similar to frequentist methods, but beware those estimates have a super duper different meaning in terms of statistical philosophy! #stats #rstats
Feel like I’ve come across people advocating for bayesian methods who want it both ways when it comes to priors.

They’re both super important if you want the real deal p(H|D)😎 but also like don’t worry about it too much! we use wide uncertain priors, the likelihood will dominate anyways 😉
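The "likelihood will dominate anyways" claim can be checked directly in the conjugate normal case. A minimal sketch (made-up numbers, known-variance model): with a wide prior, the posterior mean is a precision-weighted average that collapses onto the sample mean as n grows:

```python
def posterior_mean(ybar, n, sigma=1.0, prior_mean=0.0, prior_sd=10.0):
    """Conjugate normal model with known sigma: the posterior mean is a
    precision-weighted average of the prior mean and the sample mean."""
    prec_data = n / sigma**2          # precision contributed by the data
    prec_prior = 1 / prior_sd**2      # precision contributed by the prior
    return (prec_data * ybar + prec_prior * prior_mean) / (prec_data + prec_prior)
```

With `prior_sd=10` the prior barely moves the estimate even at n=1, and by n=1000 the posterior mean is numerically indistinguishable from the frequentist one, which is exactly the both-ways tension: the numbers agree, the interpretation doesn't.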
October 28, 2025 at 8:52 AM
Knowing that Elon has been trying to fine-tune Grok to have a ”conservative”/”anti-woke” bias, I find this genuinely scary. Last time it started calling itself MechaHitler etc., but presumably future attempts will be more subtle.
Brace for another wave of x refuges
October 26, 2025 at 4:14 PM
Reposted by Vilgot Huhn
People seem to be discussing multiverse analysis again! I haven’t read the latest piece but will use that as an opportunity to share a blog post of mine with a title of which I’m still proud (although I probably shouldn’t)

www.the100.ci/2021/03/07/m...
Mülltiverse Analysis
Psychologists like their analyses like I like my coffee: robusta. Results shouldn’t change too much, no matter which exclusion criteria are applied, which covariates are included, which transformation...
www.the100.ci
October 26, 2025 at 5:06 AM
There’s a lot of bad things to say about mental health/illness discourse online nowadays but personally I still remember it as somehow worse when I was younger. Like I haven’t seen extended debates on ”is everyone actually faking it?” in a long time.
October 24, 2025 at 6:31 PM
My go-to example for generating LLM hallucinations used to be asking them about somewhat obscure graphic novels, but trying it now I found Claude is able to check whether such information exists in its training data and go into search mode instead. An interesting development imo.
October 20, 2025 at 5:36 PM
Lots of food for thought in this talk, though in hindsight I realize it was directed at people more expert in causal inference than me. I'm far from being in a position to provide resources/commentary/tutorials/templates. On the other hand I feel inspired to try my best to be a positive example.
Happy to announce that I'll give a talk on how we can make rigorous causal inference more mainstream 📈

You can sign up for the Zoom link here: tinyurl.com/CIIG-JuliaRo...
October 20, 2025 at 3:44 PM
Stealing crown jewels from The Louvre has great romantic heist energy and joie de vivre, but like what do you do after? who’s the buyer for that sort of stuff?
October 19, 2025 at 5:13 PM
I consider generating AI images/video/voices of the (recently) dead to be a form of corpse desecration, unless the deceased explicitly asked for it.
Do you have any extremely niche, but serious, ethical stances?
October 17, 2025 at 4:50 PM
I'm taking a course on open science right now, which is very fun and inspiring, but there has been a jarring contrast between the well intentioned intra-academia discussions on how to best improve science during the day and doomscrolling about measles being back during the evening.
October 16, 2025 at 1:25 PM
I feel like "replicated across cultures" is a bit of an awkward phrase for observational data (for example "those high in neuroticism sleep worse, also in Japan"), but I've seen it a few times. Is there a word as snappy as "replicated" for like "re-observed"?
October 16, 2025 at 9:42 AM
I generally don’t know what to make of the AI field but I definitely find the technology, especially image and video generation, very surprising. If someone were to describe the technical details beforehand and ask me to guess whether it would work, I would have said ”no way”.
October 15, 2025 at 9:17 AM
Reposted by Vilgot Huhn
Notably this problem is also solved 100% without any AI and instead, at zero cost, by journals adopting format-agnostic initial submissions (as many journals have already done)
Finally, someone has solved a real problem with AI! No more having to take a paper in the format for a journal that rejected you, and reformat it for a new journal. Well done!! formatmypaper.com
October 15, 2025 at 6:58 AM
You know I think you should have the decency to be a bit uncomfortable with the moral conundrums we face in life, that’s all.
October 14, 2025 at 9:16 PM