Nora Belrose
@norabelrose.bsky.social
AI, philosophy, spirituality

Head of interpretability research at EleutherAI, but posts are my own views, not Eleuther’s.
why don't more people become zoroastrian?

it's where judaism and christianity got the ideas of ethical monotheism, the afterlife, and final judgment, but without any of their baggage

(no eternal hell, no historically questionable dogmas, etc.)
October 24, 2025 at 3:32 AM
If we care only about appearances, outcomes, and results, then AI will outcompete humans at everything

If we care about the process used to create things, then humans can still have jobs and meaningful lives

The idea that ends can be detached from means is the root of many evils
October 11, 2025 at 1:10 AM
Strongly agree with this bill https://www.usatoday.com/story/news/politics/2025/09/29/ohio-state-legislator-ban-people-marrying-ai/86427987007/
September 30, 2025 at 1:35 AM
if the laws of physics are fundamentally probabilistic, as they seem to be, that makes it easier to see how they can smoothly change over time
June 13, 2025 at 7:48 AM
data attribution is a special case of data causality:

estimating the causal effect of either learning or unlearning one datapoint (or set of datapoints) on the neural network's behavior on other datapoints
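A minimal way to make that concrete (my own sketch, with placeholder train_fn / loss_fn, not a method described in the post): the gold-standard estimate is counterfactual retraining, comparing a model trained with the datapoint against one trained without it.

import torch

def unlearning_effect(train_fn, loss_fn, train_set, held_out, idx, seed=0):
    # Retrain with and without datapoint `idx` under the same seed, then
    # measure how behavior changes on the other datapoints.
    torch.manual_seed(seed)
    model_with = train_fn(train_set)
    torch.manual_seed(seed)
    model_without = train_fn([x for i, x in enumerate(train_set) if i != idx])
    # Causal effect on each held-out point = change in its loss when idx is removed.
    return [loss_fn(model_without, x) - loss_fn(model_with, x) for x in held_out]

Attribution methods like influence functions can be read as cheap approximations to this retraining experiment.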
June 12, 2025 at 4:02 AM
Neural networks don't have organs.

They aren't made of fixed mechanisms.

They have flows of information and intensities of neural activity. They can't be organized into a set of parts with fixed functions.

In the words of Gilles Deleuze, they're bodies without organs (BwO).
March 27, 2025 at 7:11 PM
This seems like a cool way to use an adaptive amount of compute per token. I speculate that models like these will have more faithful CoT since they don't get to do "extra" reasoning on easy tokens https://arxiv.org/abs/2404.02258
Mixture-of-Depths: Dynamically allocating compute in...
Transformer-based language models spread FLOPs uniformly across input sequences. In this work we demonstrate that transformers can instead learn to dynamically allocate FLOPs (or compute) to...
arxiv.org
March 13, 2025 at 11:55 PM
Also chapter 10 where he discards the notion of the Soul but maintains the distinction between mind and brain
February 24, 2025 at 6:35 PM
William James did a lot of good philosophy of mind in chapters 1, 5, and 6 of The Principles of Psychology; we've barely made any progress in 135 years 😂
February 24, 2025 at 6:35 PM
I love this meme
February 22, 2025 at 5:33 AM
might interest @nabla_theta
February 7, 2025 at 12:32 AM
Pro tip: if you want to implement TopK SAEs efficiently and don't want to deal with Triton, just use this function for the decoder; it's much faster than the naive dense matmul implementation
https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding_bag.html
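For reference, here's a minimal sketch of what that decoder can look like (shape conventions and variable names are mine, not from the linked docs):

import torch
import torch.nn.functional as F

def topk_decode(top_idx, top_vals, w_dec, b_dec=None):
    # top_idx:  [batch, k] int64 indices of the k active latents per example
    # top_vals: [batch, k] activations of those latents
    # w_dec:    [num_latents, d_model] decoder weight matrix
    # With mode="sum" and per_sample_weights, embedding_bag computes
    # sum_j top_vals[i, j] * w_dec[top_idx[i, j]] for each row i,
    # gathering only the k active rows instead of doing a dense matmul.
    recon = F.embedding_bag(top_idx, w_dec, mode="sum", per_sample_weights=top_vals)
    return recon if b_dec is None else recon + b_dec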
February 6, 2025 at 7:32 PM
Second, we speculate that complexity measures like this could be useful for detecting undesired "extra reasoning" in deep nets. We want networks to be aligned with our values instinctively, without scheming about whether this would be consistent with some ulterior motive. arxiv.org/abs/2311.08379
February 3, 2025 at 10:01 PM
We're interested in this line of work for two reasons:

First, it sheds light on how deep learning works. The "volume hypothesis" says DL is similar to randomly sampling a network from weight space that gets low training loss. But this can't be tested if we can't measure volume.
February 3, 2025 at 10:01 PM
We find that the probability of sampling a network at random— or local volume for short— decreases exponentially as the network is trained.

And networks which memorize their training data without generalizing have lower local volume— higher complexity— than generalizing ones.
February 3, 2025 at 10:01 PM
But the total volume can be strongly influenced by a small number of outlier directions, which are hard to sample in high dimension— think of a big, flat pancake.

Importance sampling using gradient info helps address this issue by making us more likely to sample outliers.
February 3, 2025 at 10:01 PM
It works by exploring random directions in weight space, starting from an "anchor" network.

The distance from the anchor to the edge of the region, along the random direction, gives us an estimate of how big (or how probable) the region is as a whole.
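A toy illustration of the idea (my own sketch, not the estimator from the paper), assuming the region is star-shaped around the anchor so a bisection along each ray finds the boundary:

import torch

def directional_radius(anchor, direction, in_region, r_max=100.0, iters=30):
    # `in_region(weights)` is a placeholder predicate for "behaviorally close
    # to the anchor", e.g. KL divergence of the outputs below some cutoff.
    lo, hi = 0.0, r_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if in_region(anchor + mid * direction):
            lo = mid
        else:
            hi = mid
    return lo

# One Monte Carlo sample: a random unit direction in weight space.
# d = torch.randn_like(anchor); d /= d.norm()
# r = directional_radius(anchor, d, in_region)

The region's volume is proportional to the average of r^d over random directions, which is why the rare long directions of the pancake mentioned above can dominate the estimate.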
February 3, 2025 at 10:01 PM
My colleague Adam Scherlis and I developed a method for estimating the probability of sampling a neural network in a behaviorally-defined region from a Gaussian or uniform prior.

You can think of this as a measure of complexity: less probable means more complex.
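Spelled out (my notation, not the thread's): for a behaviorally defined region R of weight space, the complexity is C(R) = -log2 P(θ ∈ R) under the Gaussian or uniform prior, so every halving of the probability adds one bit of complexity.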
February 3, 2025 at 10:01 PM
What are the chances you'd get a fully functional language model by randomly guessing the weights?

We crunched the numbers and here's the answer:
February 3, 2025 at 10:01 PM
we have seven (!) papers lined up for release next week

you know you're on a roll when arxiv throttles you
February 2, 2025 at 3:35 AM
deepseek now largely replacing chatgpt for me
January 24, 2025 at 1:33 AM
Evolutionary biology can learn things from machine learning.

Natural selection alone doesn't explain "train-test" or "sim-to-real" generalization, which clearly happens.

At every level of organization, life can zero-shot adapt to novel situations. https://www.youtube.com/watch?v=jJ9O5H2AlWg
December 29, 2024 at 10:29 PM
Truth is relative, when it comes to the physical state of the universe.

But we should accept the existence of perspective-neutral facts about how perspectives relate to one another, to avoid vicious skeptical paradoxes. https://arxiv.org/abs/2410.13819
December 28, 2024 at 9:56 PM