Anselm Levskaya
@levskaya.bsky.social
Doing spooky things with linear algebra.
Pinned
The boosters of AI safety bills like SB1047 claim that open models will enable the production of biological weapons.

These claims are delusional. As a synthetic biologist and LLM engineer, I felt compelled to write up why, for anyone who might care:
dreamofmachin.es/machine_prop...
The backers of California’s SB1047 routinely cite AI-enabled bioweapons as a threat justifying the radical regulatory regime that places a locus of liability on general computational models, rather than on particular dangerous applications or criminal acts.
Reposted by Anselm Levskaya
My "early-career" developer feelings are complicated and alienating to SFBA-type career ladder people.
June 4, 2025 at 7:20 PM
Reposted by Anselm Levskaya
If your test for ground-breaking discoveries can’t detect the discovery of RNAi, or of CRISPR-Cas9, or the cryoEM resolution revolution or Alphafold2, maybe it’s not a very good test.
www.nature.com/articles/d41...
May 23, 2025 at 5:32 PM
Reposted by Anselm Levskaya
At some point, the fact that over a billion people use this technology and that they self-report high utility has to mean something.

There is lots to criticize about AI and plenty of real issues caused by AI, but the narrative that this is all a fake thing that will disappear doesn't help anyone.
May 20, 2025 at 3:54 PM
Reposted by Anselm Levskaya
Today, we’re announcing the preview release of ty, an extremely fast type checker and language server for Python, written in Rust.

In early testing, it's 10x, 50x, even 100x faster than existing type checkers. (We've seen >600x speed-ups over Mypy in some real-world projects.)
May 13, 2025 at 5:00 PM
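For context on what a checker like ty (or Mypy) actually verifies, here is a minimal, hypothetical example: the annotations below are checked statically, with no code executed. The function and its behavior are my own illustration, not from the ty announcement.

```python
from typing import Optional

def parse_port(raw: str) -> Optional[int]:
    """Return the port as an int, or None if `raw` is not a valid port.

    A static type checker verifies that every return path matches the
    declared Optional[int], and that callers handle the None case.
    """
    if not raw.isdigit():
        return None
    port = int(raw)
    return port if 0 < port < 65536 else None
```

Because the check is purely static, a faster checker shortens the edit-check loop on every keystroke in an editor, which is where the claimed speedups matter most.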
Reposted by Anselm Levskaya
This isn't true. I'm the person who ran the experiments this is BSing about. When search results are worse, people attempt fewer tasks. When they're better they attempt more.
May 12, 2025 at 1:05 AM
Reposted by Anselm Levskaya
LET’S FUCKING GOOOO THE MOST AMBITIOUS HOUSING BILL IN THE HISTORY OF THE STATE OF CALIFORNIA HAS ADVANCED OUT OF COMMITTEE
April 23, 2025 at 12:40 AM
Reposted by Anselm Levskaya
@zey.bsky.social I’m not on the site with Nazis anymore — wtf are you saying about me over there?

You know there’s new data on the origins question, right?
April 12, 2025 at 7:08 PM
Reposted by Anselm Levskaya
The case made for "lab leak" by the book "VIRAL" by Alina Chan and Matt Ridley leaned heavily on two claims in 2021:
1. The Wuhan Institute of Virology had sampled the virus most identical to SARS-CoV-2
2. SARS-CoV-2 lineage B, but not lineage A, was found in Huanan market

By 2022, neither was true, so...
April 11, 2025 at 4:32 PM
Reposted by Anselm Levskaya
My first outing of "The Unbearable Slowness of Being" at the Caltech Chen Neuroscience Workshop today! What does living at 10 bits/s mean for humans, flies, mice, and crows? More here: jieyusz.github.io/talks/2025_c...
Thanks to Profs. @cfcamerer.bsky.social & Carlos for the invite!
April 12, 2025 at 3:49 AM
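The ~10 bits/s figure admits a quick back-of-envelope sanity check (my numbers, not necessarily the talk's): Shannon estimated English text at roughly 1 bit per character, and a fast typist manages about 120 words per minute.

```python
# Back-of-envelope: behavioral throughput of fast typing.
BITS_PER_CHAR = 1.0    # Shannon's rough entropy estimate for English text
WPM = 120              # fast typing speed, words per minute
CHARS_PER_WORD = 5     # conventional "word" length, including the space

chars_per_second = WPM * CHARS_PER_WORD / 60          # 10 chars/s
bits_per_second = chars_per_second * BITS_PER_CHAR    # ~10 bits/s

print(f"{bits_per_second:.0f} bits/s")
```

The estimate lands right on the order of magnitude in the title, which is the point: many expert behaviors converge on a throughput of about 10 bits/s.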
Reposted by Anselm Levskaya
There it is again: using PLMs to predict antigen-epitope interactions from sequence alone yields a prediction accuracy of 0.65, in line with a proposed upper limit from a previous study QTed below (from doi.org/10.1101/2025.02.12.637989; I deleted an older version of this post due to typos/errors)
March 3, 2025 at 5:45 AM
Reposted by Anselm Levskaya
After 6+ months in the making and over a year of GPU compute, we're excited to release the "Ultra-Scale Playbook": hf.co/spaces/nanot...

A book to learn all about 5D parallelism, ZeRO, CUDA kernels, and how/why to overlap compute & comms, with theory, motivation, interactive plots, and 4000+ experiments!
The Ultra-Scale Playbook - a Hugging Face Space by nanotron
The ultimate guide to training LLMs on large GPU clusters
February 19, 2025 at 6:10 PM
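One of the parallelism axes a playbook like this covers is tensor parallelism. A toy simulation of the 1D column-sharded case, in plain Python with made-up shapes (real implementations replace the concatenation with an all-gather across devices):

```python
def matmul(A, B):
    """Plain-Python matrix multiply: A is m x k, B is k x n."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def shard_columns(B, n_devices):
    """Split B column-wise into n_devices contiguous shards."""
    n = len(B[0]) // n_devices
    return [[row[i * n:(i + 1) * n] for row in B] for i in range(n_devices)]

x = [[1.0, 2.0], [3.0, 4.0]]                        # activations (2 x 2)
W = [[1.0, 0.0, 2.0, 1.0], [0.0, 1.0, 1.0, 2.0]]    # weights (2 x 4)

# Each "device" holds one column shard of W and does a local matmul.
partials = [matmul(x, w) for w in shard_columns(W, 2)]
# Concatenating the partial outputs stands in for the all-gather.
y = [sum((p[i] for p in partials), []) for i in range(len(x))]

assert y == matmul(x, W)  # sharded result matches the unsharded matmul
```

The other axes (data, pipeline, context, expert parallelism) compose with this one, which is where the "5D" in the playbook's framing comes from.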
Reposted by Anselm Levskaya
Ceding techno optimism to the right is a generational scale mistake
January 26, 2025 at 2:33 PM
Reposted by Anselm Levskaya
AI is revolutionary but it also elicits some of the dumbest LinkedIn takes of all time.

I’d like this lawyer whose primary expertise is scaremongering for EA money to explain how using “future kinds of models” starts pandemics.

Outline the model to pandemic pipeline for me.
February 17, 2025 at 1:05 PM
Reposted by Anselm Levskaya
Well said, @carlbergstrom.com.

I also feel the dismantling of our scientific institutions & funding agencies for basic science is an attack on all scientists, wherever they might be (government or corporate lab, academic institution, ...).

Our collective identities are about advancing knowledge.
Thinking back on all this I better understand the pain I feel to see science under devastating attack here. It’s not just about my livelihood or my university. It’s about my identity. And it’s about a pursuit that I see as standing along with art, literature, and music as among our highest callings.
February 9, 2025 at 3:50 AM
Reposted by Anselm Levskaya
Another example of what cutting science funding is doing to our leading university research programs in the U.S.: dismantling things like the Soybean Innovation Lab at UIUC, which have made US crop yields dramatically higher.
February 9, 2025 at 7:14 AM
Reposted by Anselm Levskaya
Our online book on systems principles of LLM scaling is live at jax-ml.github.io/scaling-book/

We hope that it helps you make the most of your computing resources. Enjoy!
February 4, 2025 at 6:59 PM
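A core idea in this systems view is the roofline comparison: a kernel is compute-bound only if its arithmetic intensity (FLOPs per byte moved) exceeds the hardware's FLOPs-to-bandwidth ratio. A minimal sketch, with deliberately hypothetical hardware numbers rather than any real accelerator's spec sheet:

```python
PEAK_FLOPS = 1.0e15   # 1 PFLOP/s, hypothetical accelerator
PEAK_BYTES = 1.0e12   # 1 TB/s memory bandwidth, hypothetical
CRITICAL_INTENSITY = PEAK_FLOPS / PEAK_BYTES   # 1000 FLOPs/byte

def matmul_intensity(m, k, n, bytes_per_elem=2):
    """FLOPs per byte moved for an (m,k) @ (k,n) matmul in bf16."""
    flops = 2 * m * k * n                               # multiply-adds
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
    return flops / bytes_moved

# A large square matmul easily clears the critical intensity ...
big = matmul_intensity(8192, 8192, 8192)    # ~2731 FLOPs/byte
# ... while a batch-1 matrix-vector product is badly memory-bound.
small = matmul_intensity(1, 8192, 8192)     # ~1 FLOP/byte
```

The same arithmetic, applied layer by layer, is what lets you predict whether a given model shape will saturate a chip's compute or stall on memory and interconnect.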
Reposted by Anselm Levskaya
Y'all are just noticing that scientific societies are obsequious cowards on everything except preserving their publishing cash cows?
February 4, 2025 at 6:50 PM
Reposted by Anselm Levskaya
Making LLMs run efficiently can feel scary, but scaling isn’t magic, it’s math! We wanted to demystify the “systems view” of LLMs and wrote a little textbook called “How To Scale Your Model” which we’re releasing today. 1/n
February 4, 2025 at 6:54 PM
Reposted by Anselm Levskaya
"When conspiracy theories and nonsense cures are widely accepted, the evidence-based concepts of guilt and criminality vanish quickly too."

www.theatlantic.com/magazine/arc...
The New Rasputins
Anti-science mysticism is enabling autocracy around the globe.
January 7, 2025 at 2:29 PM
Reposted by Anselm Levskaya
Re-upping this reply thread from last night. Drugs don't come from nowhere, folks. And we're not ripping off the NIH, either.
I think you're describing science, IMO. Everything is atop a foundation of basic research. An example: at a previous company, I worked on a series of small-molecule inhibitors of Hormone-Sensitive Lipase, a possible target for Type II diabetes and related conditions. (1/10)
January 6, 2025 at 2:36 PM
Reposted by Anselm Levskaya
There's been a bunch of claims (mostly on X) that ChatGPT did great on this year's Putnam math competition. Let's do a thread to talk about it! 🧵

#MathSky
December 20, 2024 at 2:55 AM
Reposted by Anselm Levskaya
Attached is a piece of art by me for "The Unbearable Slowness of Being: Why do we live at 10 bits/s?"

Hope this will inspire you to think about the brain from a new perspective!
December 17, 2024 at 6:58 PM
Reposted by Anselm Levskaya
AI amplifying biorisk has been a major focus in AI policy & governance work. Is the spotlight merited?

Our recent cross-institutional work asks: Does the available evidence match the current level of attention?

📜 arxiv.org/abs/2412.01946
December 4, 2024 at 5:05 AM
Reposted by Anselm Levskaya
In early phase drug discovery, biology and assays are make-or-break. I can remember very few programs I worked on which were hampered by the inability to make molecules, but plenty that were hampered by the inability to capture disease biology complexity on a chip or in an enzyme readout.
December 15, 2024 at 10:51 PM
Reposted by Anselm Levskaya
All western US water issues are entirely caused by the cultivation of cash crops in places that they shouldn't be grown. Everything, literally everything else, is a rounding error
if one more person says that AI uses up fresh water i am going to have a fucking stroke
December 12, 2024 at 2:45 AM