Oliver Daniels
@oadaniels.bsky.social
CS PhD student at UMass Amherst, AI safety stuff
Reposted by Oliver Daniels
Reasoning is about variable binding. It’s not about information retrieval. If a model cannot do variable binding, it is not good at grounded reasoning, and there’s evidence accruing that large scale can make LLMs worse at in-context grounded reasoning. 🧵
June 12, 2025 at 5:12 PM
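To make the claim concrete, here is a minimal sketch of the kind of in-context variable-binding probe the thread is gesturing at: the name-object pairings are sampled fresh per prompt, so memorized facts can't help and the model has to track the bindings in context. The names, objects, and helper function are illustrative, not from the thread.

```python
# A hypothetical variable-binding probe (illustrative, not from the thread):
# bindings are arbitrary and per-prompt, so success requires in-context
# binding rather than information retrieval.
import random

names = ["Alice", "Bob", "Carol", "Dave"]
objects = ["red key", "blue key", "green key", "gold key"]

def make_binding_prompt(seed: int):
    rng = random.Random(seed)
    # Bind each name to a randomly permuted object.
    pairs = list(zip(names, rng.sample(objects, len(names))))
    facts = ". ".join(f"{n} holds the {o}" for n, o in pairs)
    target_name, target_obj = rng.choice(pairs)
    prompt = f"{facts}. Question: what does {target_name} hold?"
    return prompt, target_obj

prompt, answer = make_binding_prompt(0)
print(prompt)  # e.g. "Alice holds the green key. ... what does Alice hold?"
print(answer)
```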
Most exciting alignment research since...steering vectors?
@turntrout.bsky.social is maybe the most underrated alignment researcher (and he's pretty highly rated!)
1) AIs are trained as black boxes, making it hard to understand or control their behavior. This is bad for safety! But what is an alternative? Our idea: train structure into a neural network by configuring which components update on different tasks. We call it "gradient routing."
December 8, 2024 at 6:31 PM
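A minimal sketch of the idea in PyTorch, assuming a toy two-task setup: each task's gradients are masked so only its designated block of parameters updates. The module split and masking scheme here are illustrative; the paper's actual routing configuration may differ.

```python
# Toy gradient routing sketch (illustrative): route each task's
# gradient updates to a designated subset of the network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),  # block A: routed to task 0
    nn.ReLU(),
    nn.Linear(32, 2),   # block B: routed to task 1
)

# Which modules each task is allowed to update.
routes = {0: [model[0]], 1: [model[2]]}

def routed_step(x, y, task_id, opt, loss_fn=nn.CrossEntropyLoss()):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Zero gradients everywhere except the modules routed to this task.
    allowed = {p for m in routes[task_id] for p in m.parameters()}
    for p in model.parameters():
        if p not in allowed and p.grad is not None:
            p.grad.zero_()
    opt.step()
    return loss.item()

opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
routed_step(x, y, task_id=0, opt=opt)  # only block A's weights move
```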
Reposted by Oliver Daniels
To help create jobs for blue collar men, I would simply legalize infill housing construction in places where demand is high.
December 3, 2024 at 10:06 PM
Reposted by Oliver Daniels
I really like LessWrong's work, and they are currently fundraising.

The rationality community has been quick on AI, crypto, COVID, and the replication crisis. Probably very valuable.

https://www.lesswrong.com/posts/5n2ZQcbc7r4R8mvqc/the-lightcone-is-nothing-without-its-people-lw-lighthaven-s-5
(The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser — LessWrong
TLDR: LessWrong + Lighthaven need about $3M for the next 12 months. Donate here, or send me an email, DM or signal message (+1 510 944 3235), or comm…
November 30, 2024 at 5:28 AM