Lucas Dixon
@iislucas.bsky.social
Machine learning, interpretability, visualization, Language Models, People+AI research
New open-source AI-assisted reading experience we built for arXiv papers: lumi.withgoogle.com
Lumi: A reading prototype by Google PAIR
Explore research papers with AI features including annotations, granular summaries, and custom Q&A. Prototype by People & AI Research (PAIR) at Google
lumi.withgoogle.com
October 1, 2025 at 3:55 PM
Reposted by Lucas Dixon
🚨 We're looking for more reviewers for the workshop!
📆 Review period: May 24-June 7

If you're passionate about making interpretability useful and want to help shape the conversation, we'd love your input.

💡🔍 Self-nominate here:
docs.google.com/forms/d/e/1F...
May 20, 2025 at 12:05 AM
Reposted by Lucas Dixon
Take a look at some initial research projects, and see if there's one you'd like to work on:
github.com/ARBORproject...
Or propose your own idea! There are many ways to contribute, and we welcome all of them.
ARBORproject arborproject.github.io · Discussions
Explore the GitHub Discussions forum for ARBORproject arborproject.github.io. Discuss code, ask questions & collaborate with the developer community.
github.com
February 20, 2025 at 7:55 PM
Reposted by Lucas Dixon
Great thread describing the new ARBOR open interpretability project, which has some fascinating projects already. Take a look!
ARBOR aims to accelerate investigation of the internals of the new class of AI "reasoning" models.

See the ARBOR discussion board for a thread for each project underway.

github.com/ArborProjec...
February 20, 2025 at 10:49 PM
Reposted by Lucas Dixon
Looking for a small- or medium-sized VLM? PaliGemma 2 spans a more than 150x range of compute!

Not sure yet if you want to invest the time 🪄finetuning🪄 on your data? Give it a try with our ready-to-use "mix" checkpoints:

🤗 huggingface.co/blog/paligem...
🎤 developers.googleblog.com/en/introduci...
February 19, 2025 at 5:47 PM
Reposted by Lucas Dixon
In December, I posted about our new paper on mastering board games using internal + external planning. 👇

Here's a talk about it, now on YouTube, given by my awesome colleague John Schultz!

www.youtube.com/watch?v=JyxE...
January 17, 2025 at 5:26 PM
Google Research Scholar Program applications are open until 27th Jan. The program supports early-career professors (PhD received within seven years of submission).
research.google/programs-and...
Research scholar program
Overview
research.google
December 21, 2024 at 9:47 AM
Reposted by Lucas Dixon
What's in an attention head? 🤯

We present an efficient framework – MAPS – for inferring the functionality of attention heads in LLMs ✨directly from their parameters✨

A new preprint with Amit Elhelo 🧵 (1/10)
December 18, 2024 at 5:55 PM
Reposted by Lucas Dixon
We scaled training data attribution (TDA) methods ~1000x to find influential pretraining examples for thousands of queries in an 8B-parameter LLM over the entire 160B-token C4 corpus!
medium.com/people-ai-re...
December 13, 2024 at 6:57 PM
I think this (LLMs make writing small scripts super easy) gets more profound again when we start to make lots of tiny voice-powered apps/agents... would love to see more prototyping tools for this and play with them. Send me pointers!
December 11, 2024 at 5:49 PM
Reposted by Lucas Dixon
🚨 New Paper 🚨
Can LLMs perform latent multi-hop reasoning without exploiting shortcuts? We find the answer is yes – they can recall and compose facts not seen together in training, without guessing the answer, but success depends heavily on the type of the bridge entity (80% for country, 6% for year)! 1/N
November 27, 2024 at 5:26 PM
arxiv.org/abs/2405.14838: via a supervised learning curriculum that incrementally removes the start of the CoT, they are able to train gpt2-small to do 9-digit multiplication without any CoT; a fascinating and impressive result! (A tiny sketch of the curriculum idea is below.)
From Explicit CoT to Implicit CoT: Learning to Internalize CoT Step by Step
When leveraging language models for reasoning tasks, generating explicit chain-of-thought (CoT) steps often proves essential for achieving high accuracy in final outputs. In this paper, we investigate...
arxiv.org
November 29, 2024 at 9:27 AM
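For intuition, here is a minimal toy sketch of that curriculum (not the paper's code, and with made-up token strings): at each stage a few more leading CoT tokens are dropped from the training target, until only the final answer remains.

```python
# Toy sketch of the "internalize CoT step by step" curriculum idea:
# each stage removes a few more leading CoT tokens from the target,
# so the model is gradually trained to answer without explicit CoT.

def make_stage_target(cot_tokens, answer_tokens, n_removed):
    """Supervision target for one curriculum stage."""
    return cot_tokens[n_removed:] + answer_tokens

cot = ["9", "*", "7", "=", "63", ";", "carry", "6", "..."]  # made-up CoT
answer = ["63"]

# Remove 3 more CoT tokens per stage; the final stage has no CoT at all.
for stage, n_removed in enumerate(range(0, len(cot) + 1, 3)):
    print(f"stage {stage}: target = {make_stage_target(cot, answer, n_removed)}")
```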
Reposted by Lucas Dixon
I'm learning a lot from this visually rich blog post. Also, I'm charmed by the rotating list of equal-contribution authors. Good knights-of-the-round-table energy!
dl.heeere.com/conditional-...
A Visual Dive into Conditional Flow Matching | ICLR Blogposts 2025
Conditional flow matching (CFM) was introduced by three simultaneous papers at ICLR 2023, through different approaches (conditional matching, rectifying flows and stochastic interpolants). <br/> The m...
dl.heeere.com
November 27, 2024 at 6:37 PM
Reposted by Lucas Dixon
I want to describe my experience of coding with AI, because it seems to differ from other people's expectations. Earlier this morning, I saw a beautiful image here, based on roots of polynomials: bsky.app/profile/scon...
I wanted to try this idea myself, but with animation in a JavaScript context!
Roots of parametric polynomials.
Made with #python, #matplotlib, #numpy and #sympy.
November 17, 2024 at 5:01 PM
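For anyone who wants to play with the underlying idea, here is a minimal numpy/matplotlib sketch (the specific polynomial family is made up purely for illustration, not the original artwork): sweep a parameter, collect the complex roots, and scatter them in the complex plane.

```python
# Minimal sketch: roots of a one-parameter polynomial family in the complex plane.
import numpy as np
import matplotlib.pyplot as plt

ts = np.linspace(0.0, 2.0 * np.pi, 400)
all_roots = []
for t in ts:
    # Coefficients of z^5 + cos(t)*z^2 + sin(t)*z + 1, highest degree first.
    coeffs = [1.0, 0.0, 0.0, np.cos(t), np.sin(t), 1.0]
    all_roots.append(np.roots(coeffs))
all_roots = np.concatenate(all_roots)

# Color each root by the parameter value that produced it (5 roots per t).
plt.scatter(all_roots.real, all_roots.imag, s=2, c=np.repeat(ts, 5), cmap="twilight")
plt.gca().set_aspect("equal")
plt.title("Roots of a parametric polynomial family")
plt.show()
```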
Reposted by Lucas Dixon
It's so beautiful to see this kind of fluid interaction with huge data!
Mosaic v0.12 is out: database-powered scalable, interactive visualization! 📈 One new addition is support for dynamic changes in the backing data. Move between smaller and larger samples to balance speed and comprehensive coverage.
November 21, 2024 at 6:07 PM
Reposted by Lucas Dixon
The Gini coefficient is the standard way to measure inequality, but what does it mean, concretely? I made a little visualization to build intuition:
www.bewitched.com/demo/gini
November 23, 2024 at 3:31 PM
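For reference, the coefficient itself is easy to compute; here is a small numpy sketch using the standard mean-absolute-difference formulation (this is not the code behind the linked demo).

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative sample: 0 = perfect equality."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    total = x.sum()
    if n == 0 or total == 0:
        return 0.0
    index = np.arange(1, n + 1)
    # Closed form for sorted data, equivalent to the mean absolute difference
    # between all pairs divided by twice the mean.
    return (2 * np.sum(index * x) - (n + 1) * total) / (n * total)

print(gini([1, 1, 1, 1]))    # 0.0: everyone has the same amount
print(gini([0, 0, 0, 10]))   # 0.75: one person holds everything (n = 4)
```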
Reposted by Lucas Dixon
A meditative toy, visualizing Jupiter's Galilean moons:
www.bewitched.com/demo/jupiter/
Galilean Moons Timeline
A meditative visualization of Jupiter's Galilean moons
www.bewitched.com
November 24, 2024 at 3:25 PM
People can now apply for Student Researcher roles (basically a kind of internship) at Google/DeepMind (until Dec 13).
www.google.com/about/career...
Student Researcher, 2025 — Google Careers
www.google.com
November 20, 2024 at 9:03 PM