Dangerous_Chipmunk
@dangerouschipmunk.bsky.social
Robert R Butler III
Senior Research Scientist @ Stanford

I do genomics stuff with neurodegeneration & therapeutics.
Formerly, genomics stuff with neuropsychiatry.
Formerly (x2), genomics stuff with microbes (& humans/food).

posts are my own, etc.
Pish posh, bubble shmubble!
October 30, 2025 at 4:10 PM
This is a fascinating hypothesis that places ASD/SCZ in the context of the evolution of sapience, with the rapid expansion of layer 2/3 neurons under fitness selection. That might inform an exposure risk, but I think it ties that spectrum to the existence of humanity itself. 🧪🧬🖥️ doi.org/10.1093/molb...
October 5, 2025 at 9:44 PM
Nature, please stop hyping the Mechanical Turk. Humans at the AI Scientist built this non-agent (at best a workflow). They did it with full knowledge they are using works under CC-BY which means they intended to violate those terms. Central to authorship is culpability. Thus a bot can't author.
August 20, 2025 at 4:35 PM
So we chained the doctor to a radiator and fed it moldy bread & water, and for some reason it had trouble focusing on work. But I think we have to be fair to our toy, right? What WE decide is fair, without peer review, and hidden in our vague, toothless white paper.
microsoft.ai/new/the-path...
July 7, 2025 at 5:12 PM
[Doctor Kiosk inside Carl's Jr restroom]: You have exceeded your $2000 limit for care on this illness. Please die now, or deposit two McLife tokens to continue...
microsoft.ai/new/the-path...
July 7, 2025 at 4:59 PM
My favorite alignment problem:
March 3, 2025 at 11:07 PM
A major problem in grading LLMs is that peer review is qualitative. Doing stats on opinions of peer-review doesn't solve that. I remember a news piece where a room of people read identical horoscopes they thought were tailored to them. Was it accurate? Most said yes.🧪 www.nature.com/articles/d41...
January 2, 2025 at 9:12 PM
My friend just said "this comic is my entire career". I would like to disagree, but I have to go define differential LR scores for an arbitrary number of exp. groups with an arbitrary number of spatial slides with an arbitrary number of celltypes with arbitrary combinations of ligands & receptors...
November 14, 2024 at 9:02 PM
Wow. That's a bananas way to announce your candidacy for Most toxic PI on campus...
November 11, 2024 at 5:27 PM
By a wide margin my favorite fake AI post. OpenAI should be desperate to claim it because that's about as authentic as dev responses get.
May 25, 2024 at 3:45 PM
With lots of posts of Google's AI nonsense, remember the fundamental alignment problem: these are language models, not knowledge models. It is not "just like Wikipedia" even though corpos desperately want you to think so. And pirates were not allowed in that f'ing bar I watched that episode again!
May 24, 2024 at 4:38 PM
It definitely isn't a good idea to speculate too much on such sparse data, but since they do, this kind of stands out:
November 14, 2023 at 7:22 PM
See, there's the disconnect between someone with experiential knowledge and a CIO who only sees shareholder $$$. And yet we know who is going to win out in the final iteration. In a year he will be saying 'maybe we can just spot check the medical field, I bet it's pretty accurate...'
September 12, 2023 at 4:43 PM
Sometimes bioinformatics is like the worst RPG 🧪🧬🖥️:
September 11, 2023 at 7:49 PM
You love to see it...
August 21, 2023 at 12:44 AM
It is a discord in genomics that bodes ill for precision medicine, often more motivated by revenue than actual need (primer article below). As a large amount of funding is available for D&I in research, a serious criterion for awards should be a concrete good that will benefit the studied group. 🧬🖥
August 14, 2023 at 12:34 AM
Oh good, the closed-source, profit-driven LLMs will be myopic in their hallucinations. Remember, folks: it doesn't matter what you say in your publication, only what the most popular algorithm thinks of the most popular papers from the last five years...
https://doi.org/10.1038/d41586-023-02470-3
August 3, 2023 at 10:53 PM