Jase Gehring
@skyjase.bsky.social
scientist at UC Berkeley inventing advanced genomic technologies

lover of molecules, user of computers

https://scholar.google.com/citations?user=63ZRebIAAAAJ&hl=en
not to dunk on this conclusion, but it's an insidious mindset that has become pervasive. how you spend your time, how you make your money, is one of the most important ethical and political choices you can make.

convincing ourselves otherwise is arguably The Whole Problem in American society today
i honestly do not know what major company in america isn't involved in some ghastly shit at this point? i just would not *do* the ghastly shit myself.
are we shaming people for having worked for companies that did ghastly shit elsewhere, far away organizationally? because I'll be the first to say I ran a poop plant for a company that was/is a major defense contractor, and I'm pretty sure was involved with NEOM, and *am* sure works with the UAE
November 10, 2025 at 8:07 PM
employers can flip the script and turn damaging layoffs into an AI productivity story. satisfying for now, maybe, but it's the kind of behavior that'll exacerbate a potential future collapse
There isn’t much evidence that layoffs are being caused by AI taking people’s jobs.

For big tech, such as Amazon’s recent layoffs, the reason seems to be a combination of cutting headcount to balance increased spending on AI infrastructure and anticipation of (not realized) AI productivity gains.
AI isn't replacing jobs. AI spending is
Big spending on artificial intelligence puts pressure on jobs, as gloomy narratives about the future of work are ironically making new graduates less employable.
www.fastcompany.com
November 10, 2025 at 4:15 PM
a race draws maximum investment early on. it’s an effective lever against the USG to deregulate US companies and restrictively regulate Chinese ones.

A race implies a winner, and the prize is Control.

It is not looking like an accurate description of AI progress.
not really sure why we think AI is a race exactly
November 8, 2025 at 3:36 PM
big moves!

so that's

Parse -> Qiagen
Scale -> 10X Genomics
Fluent -> Illumina

next-gen single-cell omics, capable of processing 10^6 to 10^8 cells

i wonder what the near term looks like - will commercial kits stagnate while the buyers rebrand?
November 4, 2025 at 10:44 PM
o wow that James Zou paper was preprinted a year ago! i wonder if training on synthetic wrong answers could give an abundant, although sparse, training signal?

also, it looks to me like performance is not consistent with model scale or capability?

arxiv.org/abs/2410.21195
Belief in the Machine: Investigating Epistemological Blind Spots of Language Models
As language models (LMs) become integral to fields like healthcare, law, and journalism, their ability to differentiate between fact, belief, and knowledge is essential for reliable decision-making. F...
arxiv.org
November 4, 2025 at 5:03 PM
gonna have a hard time with this one at current LLM capabilities because www.nature.com/articles/s42...
November 4, 2025 at 3:24 PM
if you have used these systems for challenging work, I don’t know how you can come to this conclusion.

and the New Yorker is not your venue of choice. You want to do a blog post and try to use it to scruff up another $100B
“Perhaps the secrets of thinking are simpler than anyone expected—the kind of thing that a high schooler, or even a machine, could understand.”
The Case That A.I. Is Thinking
ChatGPT does not have an inner life. Yet it seems to know what it’s talking about.
www.newyorker.com
November 3, 2025 at 10:30 PM
Reposted by Jase Gehring
In which we all learn the phrase “ballistic microscopy”:
Blasting Through Cells
www.science.org
October 29, 2025 at 4:48 PM
AGI is such a funny concept to adopt. "no i mean, like, actually AI. like, actually"
October 29, 2025 at 3:33 AM
yes, this is the width of the bullseye hyperscalers need to hit to avoid bursting the bubble. They need LLMs to become much more effective (to create value) while keeping compute costs high (to capture value)
some of you are saying the same words and meaning very very opposite things
October 28, 2025 at 5:36 PM
so who's the uh Albert Speer in this situation?
Who knew the White House was gutted/renovated by Truman btwn ‘49 and ‘52?

I did. I’m a nerd. The renovation was necessary cuz it was on the verge of collapse.

It was done w great care and reverence for its historical significance

Trump’s “upgrade”, on the other hand, is motel 6 shit
October 22, 2025 at 4:42 AM
I often feel like OpenAI doesn’t understand the bet they’ve placed.
October 21, 2025 at 5:01 PM
This isn’t how you get to AGI. This is the path to a $10B LLM financial consultant, not a superintelligence worth trillions. This training regime doesn’t even make sense in a world with AGI on the horizon.
Anthropic is going after biomed research, OpenAI goes for finance
October 21, 2025 at 5:00 PM
the quality of an idea is very poorly correlated with how long it took you to come up with it

fairly well correlated with how much time you spend working on it
October 21, 2025 at 4:33 PM
we should try to make mirror life
Do you have any extremely niche, but serious, ethical stances?
October 20, 2025 at 4:08 PM
you have got to be kidding me.

(also love the effect on the sign that makes it look like a party invite)
An artillery shell fired during the 250th anniversary celebration of the Marine Corps at Camp Pendleton on Saturday detonated prematurely over Interstate 5, damaging a California Highway Patrol vehicle on JD Vance's security detail www.nytimes.com/2025/10/19/u...
Artillery Shell Detonated Over Interstate 5 During Marines’ Celebration, California Officials Say
www.nytimes.com
October 19, 2025 at 9:39 PM
our communications system is terrible for every user. completely co-opted to blast spam, scrape info, sell ads, and run scams. so many of America’s problems encapsulated in this one slice of life
oh my god the spam calls why
October 18, 2025 at 5:55 PM
Dwarkesh-Karpathy interview was quite good. Andrej is such a clear thinker, very well positioned to offer an honest and informed AI outlook. I share his view on the state of the field, but I’m more pessimistic about the intersection of big tech, AI, and American capitalism
October 18, 2025 at 5:42 PM
allowing adult content means LLMs don't have the juice
October 15, 2025 at 5:20 PM
mess around in the lab or on the computer?
October 12, 2025 at 5:30 PM
you know i'm down bad when i'm scrolling the ResearchGate forums. just completely out of my element. flailing.
October 11, 2025 at 2:17 AM
Using LLMs for science is like scrolling posts on some niche labrat internet forum.

and not a good one like SeqAnswers. i'm talking some dicey ResearchGate advice. some r/DIYbio. like you almost wish you hadn't read it.
October 11, 2025 at 2:06 AM
"I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do."

"my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years."

Dario Amodei
October 11, 2025 at 1:59 AM
my goodness what a fear-mongering, over-the-top hype headline. really leaning into telling lies to sell newspapers
“In September, scientists at Stanford reported they had used A.I. to design a virus for the first time.”
Opinion | The A.I. Prompt That Could End the World
www.nytimes.com
October 10, 2025 at 10:05 PM
web developers using a stochastic generative algorithm:

omg it's alive u guys. it has rights
October 5, 2025 at 8:05 PM