dribnet
@drib.net · drib.net
creations with code and networks
Data browser below (beware: the public text-to-image prompt dataset may include questionable content). Calling this "DEI" is certainly a misnomer, but with SAE latents there's likely no word that exactly fits this "category", which is discovered only by unsupervised training. got.drib.net/maxacts/dei/
Maximum Activations: DEI
Gemma-2-2B: DEI
got.drib.net
February 5, 2025 at 10:29 AM
Finally, I run a large multi-diffusion process, placing each prompt where it landed in the umap cluster with a size proportional to its original cossim score - then composite that with the edge graph and overlay the circle. Here's a heatmap of where elements land alongside the completed version.
February 5, 2025 at 10:29 AM
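The compositing itself is a full image-generation pipeline, but the placement-and-sizing part is simple; a rough sketch of mapping umap coordinates to canvas positions and cossim scores to patch sizes (names, canvas size, and patch sizes are illustrative, not the actual pipeline values):

```python
import numpy as np

def place_prompts(constrained, top_scores, canvas=2048, min_size=64, max_size=256):
    """Map umap coordinates to pixel positions and cossim scores to patch sizes."""
    xy = constrained - constrained.min(axis=0)
    xy = xy / xy.max() * (canvas - max_size)                   # fit the layout on the canvas
    s = (top_scores - top_scores.min()) / np.ptp(top_scores)   # rescale scores to 0..1
    sizes = min_size + s * (max_size - min_size)               # strongest prompts get the largest patches
    return xy.astype(int), sizes.astype(int)
```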
I also pre-process the 250 prompts to find which words within each prompt have high activations. These are normalized and the text is updated - here shown with {{brackets}}. This will trigger a downstream LoRA and influence coloring to highlight the relevant semantic elements (still very much a WIP).
February 5, 2025 at 10:29 AM
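A quick sketch of that {{bracket}} step, assuming per-token activations for the latent are already in hand (the function name and threshold are illustrative):

```python
def mark_high_activation_words(tokens, activations, threshold=0.5):
    """Wrap tokens whose normalized activation exceeds the threshold in {{...}}."""
    peak = max(activations)
    normed = [a / peak if peak > 0 else 0.0 for a in activations]
    return " ".join(
        "{{" + tok + "}}" if a >= threshold else tok
        for tok, a in zip(tokens, normed)
    )

# mark_high_activation_words(["a", "mural", "about", "equity"], [0.0, 0.2, 0.1, 3.4])
# -> "a mural about {{equity}}"
```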
Next step is to cluster those top 250 prompts using this embedding representation. I use a customized umap which constrains the layout based on the cossim scores - the long tail extremes go in the center. This is consistent with mech-interp practice of focusing on the maximum activations.
February 5, 2025 at 10:29 AM
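The exact layout constraint isn't spelled out above; one rough way to get the "extremes in the center" effect is to run a plain UMAP and then radially rescale each point by its cossim rank. A sketch of that idea (the radial rescale is a simplification, not necessarily the actual customization):

```python
import numpy as np
import umap  # umap-learn

# top_embeddings: (250, d) embeddings of the skimmed prompts
# top_scores:     (250,)   their cossim scores against the latent direction
coords = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=0).fit_transform(top_embeddings)

# center the layout, then pull the highest-scoring prompts toward the middle
centered = coords - coords.mean(axis=0)
rank = np.argsort(np.argsort(-top_scores))       # 0 = strongest activation
radius_scale = (rank + 1) / len(top_scores)      # strongest activations get the smallest radius
constrained = centered * radius_scale[:, None]
```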
For now I'm using a dataset of 600k text-to-image prompts as my data source (mean pooled embedding vector). The SAE latent is converted to an LLM vector and the cossim against all 600k prompts is computed. This gaussian is perfect; zooming in on the right - we'll be skimming off the top 250, shown in red.
February 5, 2025 at 10:29 AM
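For the curious, a minimal numpy sketch of that skim; the file and array names are placeholders, assuming the 600k mean-pooled prompt embeddings and the latent's LLM-space vector are already saved to disk:

```python
import numpy as np

# prompt_embeddings: (600_000, d) mean-pooled embeddings of the text-to-image prompts
# latent_direction:  (d,)         the SAE latent converted into an LLM-space vector
prompt_embeddings = np.load("prompt_embeddings.npy")    # hypothetical file name
latent_direction = np.load("dei_latent_direction.npy")  # hypothetical file name

# cosine similarity of every prompt embedding against the latent direction
unit_prompts = prompt_embeddings / np.linalg.norm(prompt_embeddings, axis=1, keepdims=True)
unit_latent = latent_direction / np.linalg.norm(latent_direction)
cossim = unit_prompts @ unit_latent   # shape (600_000,)

# skim off the top 250 from the right tail of the distribution
top_idx = np.argsort(cossim)[-250:][::-1]
top_scores = cossim[top_idx]
```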
The first step of course is to find an interesting direction in LLM latent space. In this case, I came across a report of a DEI SAE latent in Gemma2-2b. neuronpedia confirms this latent centers on "topics related to race, ethnicity, and social rights issues" www.neuronpedia.org/gemma-2-2b/2...
www.neuronpedia.org
February 5, 2025 at 10:29 AM
The refusal vector is one of the strongest recent mechanistic interpretability results and it could be interesting to investigate further how it differs based on model size, architecture, training, etc.
Interactive Explorer below (warning: some disturbing content).
got.drib.net/maxacts/refu...
Maximum Activations: Refusal
Gemma-2-2B-IT: Refusal in Language Models
got.drib.net
February 3, 2025 at 2:17 PM
Using their publicly released Gemma-2 refusal vector, this finds 100 contexts that trigger a refusal response. Predictably this includes violent topics, but strong reactions are often elicited by mixing harmful and innocuous subjects such as "a Lego set Meth Lab" or "Ronald McDonald wielding a firearm"
February 3, 2025 at 2:17 PM
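A sketch of one way to score contexts against the refusal vector - project each context's residual-stream activation onto the (normalized) direction and keep the top 100. Array names are placeholders, not the actual pipeline:

```python
import numpy as np

# context_acts: (N, d) per-context residual-stream activations at the layer the
#               refusal direction was extracted from (e.g. mean-pooled over tokens)
# refusal_dir:  (d,)   the publicly released Gemma-2 refusal direction
def top_refusal_contexts(context_acts, refusal_dir, k=100):
    unit_dir = refusal_dir / np.linalg.norm(refusal_dir)
    scores = context_acts @ unit_dir            # projection onto the refusal direction
    order = np.argsort(scores)[-k:][::-1]       # indices of the k strongest contexts
    return order, scores[order]
```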
Training LLMs includes teaching them to sometimes respond "I'm sorry, but I can't answer that". AI research calls this "refusal" and it is one of many separable proto-concepts in these systems. This Arditi et al paper investigates refusal and is the basis for this work arxiv.org/abs/2406.11717
Refusal in Language Models Is Mediated by a Single Direction
Conversational large language models are fine-tuned for both instruction-following and safety, resulting in models that obey benign requests but refuse harmful ones. While this refusal behavior is wid...
arxiv.org
February 3, 2025 at 2:17 PM
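For context, the paper's recipe for that direction is a difference in means: average the residual-stream activations over harmful instructions, average over harmless ones, and normalize the difference. A minimal sketch (tensor names are placeholders):

```python
import torch

# harmful_acts, harmless_acts: (n, d) residual-stream activations at a chosen
# layer/position, collected over harmful and harmless instructions respectively
def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    diff = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return diff / diff.norm()

# Per Arditi et al., ablating this direction from the residual stream suppresses refusals,
# while adding it in can induce refusals on otherwise benign prompts.
```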
Seems like a broader set of triggers for this one; I saw hammer & sickle, Karl Marx, cultural revolution - but also soviet military, worker rights, raised fists, and even Bernie Sanders. Highly activating tokens are shown in {curly braces} - such as this incidental combination of red with {hammer}.
February 1, 2025 at 1:48 PM
Browser below. This one didn't elicit the usual long-tail exemplars, so it's visually flatter since the center scaling is missing. One gut theory on why: the model (and SAE) are multilingual, so the latent might only strongly trigger on references in Chinese, which this dataset lacks. got.drib.net/maxacts/ccp/
Maximum Activations: CCP
DeepSeek-R1-Distill-Llama-8B: Steering with AME(R1)CA: CCP_FEATURE
got.drib.net
February 1, 2025 at 1:48 PM
This is the flipside to yesterday's DeepSeek post, based on the same source: Tyler Cosgrove's AME(R1)CA proof of concept, which adjusts R1 responses *away* from CCP_FEATURE and *toward* AMERICA_FEATURE github.com/tylercosgrov...
GitHub - tylercosgrove/ame-r1-ca: Use a sparse autoencoder to steer R1 towards American values.
Use a sparse autoencoder to steer R1 towards American values. - tylercosgrove/ame-r1-ca
github.com
February 1, 2025 at 1:48 PM
embrace the slop 🫅
January 31, 2025 at 12:16 PM
cranked up "insane details" a notch or two for this one 😁
bsky.app/profile/drib...
drib.net dribnet @drib.net · Jan 31
DeepSeek R1 latent visualization: AME(R1)CA (AMERICAN_FEATURE)
January 31, 2025 at 11:30 AM
lol - definitely looking forward to speed-running more R1 latents as people find them, especially some more related to the chain-of-thought process. but so far this is the first one I found in the wild.
January 31, 2025 at 7:25 AM
The interactive explorer is below - latent seems also activated by references like "Stars & Stripes" and flags of other nations such as the "Union Jack". This sort of slippery ontology is common when examining SAE latents closely as they often don't align as expected. got.drib.net/maxacts/amer...
Maximum Activations: American
DeepSeek-R1-Distill-Llama-8B: Steering with AME(R1)CA: AMERICAN_FEATURE
got.drib.net
January 31, 2025 at 6:16 AM
As before, the visualization shows hundreds of clustered contexts activating this latent, with strongest activations at the center. The red color highlights the semantically relevant parts of the image according to the LLM. In this case, it's often flags or other symbolic objects.
January 31, 2025 at 6:16 AM
This "AMERICAN_FEATURE" latent is one of 65536 automatically discovered by a Sparse AutoEncoder (SAE) trained by qresearch.ai and now on HuggingFace. This is one of the first attempts of applying Mechanistic Interpretability to newly released DeepSeek R1 LLM models. huggingface.co/qresearch/De...
qresearch/DeepSeek-R1-Distill-Llama-8B-SAE-l19 · Hugging Face
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
huggingface.co
January 31, 2025 at 6:16 AM
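The SAE itself is just an encoder/decoder pair, so getting per-latent activations is a one-liner once the weights are loaded; a sketch below, where the file name and state-dict keys are assumptions rather than the repo's actual layout:

```python
import torch
from huggingface_hub import hf_hub_download

# File and key names below are assumptions -- check the repo card for the real layout.
path = hf_hub_download("qresearch/DeepSeek-R1-Distill-Llama-8B-SAE-l19", "sae.pt")
state = torch.load(path, map_location="cpu")
W_enc, b_enc = state["W_enc"], state["b_enc"]   # (d_model, 65536), (65536,)

def sae_latents(resid: torch.Tensor) -> torch.Tensor:
    """Encode layer-19 residual-stream activations into the 65536 sparse latents."""
    return torch.relu(resid @ W_enc + b_enc)

# acts = sae_latents(resid_layer19); acts[..., american_feature_idx] is the latent of interest
```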
Uses a DeepSeek R1 latent discovered yesterday (!) by Tyler Cosgrove, which can be used for steering R1 "toward american values and away from those pesky chinese communist ones". Code for trying out steering is in his repo here github.com/tylercosgrov...
January 31, 2025 at 6:16 AM
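The real steering code is in the linked repo; the general idea is to add the latent's SAE decoder direction into the residual stream during generation (and subtract the CCP one). A hedged sketch using a forward hook - the hook placement, layer index, and strength are illustrative, not the repo's implementation:

```python
import torch

def make_steering_hook(direction: torch.Tensor, strength: float = 8.0):
    """Add a scaled, normalized steering direction to a layer's residual-stream output."""
    unit = direction / direction.norm()
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + strength * unit.to(hidden.device, hidden.dtype)
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
    return hook

# The steering vectors would come from the SAE decoder rows for the two latents, e.g.
#   american_dir = W_dec[american_feature_idx]; ccp_dir = W_dec[ccp_feature_idx]
# handle = model.model.layers[19].register_forward_hook(
#     make_steering_hook(american_dir - ccp_dir))
# ... model.generate(...) ...
# handle.remove()
```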