john44234.bsky.social
@john44234.bsky.social
Reposted by john44234.bsky.social
I have been charged in a federal indictment sought by the Department of Justice.

This political prosecution is an attack on all of our First Amendment rights. I’m not backing down, and we’re going to win.
October 29, 2025 at 4:55 PM
Reposted by john44234.bsky.social
look at this shit!
October 18, 2025 at 11:39 PM
Reposted by john44234.bsky.social
i am going to try to give a framework, from my own understanding, that laypeople can follow.
yeah - I was impressed that token prediction is as powerful as it is, but there's more going on than that, and I don't really follow it any more.
October 13, 2025 at 6:36 PM
Reposted by john44234.bsky.social
September 7, 2025 at 7:57 PM
Reposted by john44234.bsky.social
i think "how much and what information is going in and out" is important but where your thinky bits are relative to your body or whether the body really exists aren't
August 29, 2025 at 4:09 AM
Reposted by john44234.bsky.social
I've edited this and now think it's the very best thing I've done on AI and the environment, by a wide margin andymasley.substack.com/p/i-cant-fin...
Data centers don't raise household water bills at all, anywhere
A lot of journalism on AI's water impacts is misleading
andymasley.substack.com
August 27, 2025 at 5:16 PM
Reposted by john44234.bsky.social
"AI 2027" argues that AI will reach roughly human level in roughly 2027. This just happens to be right about when OpenAI would expect to start to run out of money.

I argue that this is not a coincidence at all, and its predictions are all wrong.

www.verysane.ai/p/agi-probab...
AGI: Probably Not 2027
AI 2027 is a website that might be described as a paper, manifesto, or thesis.
www.verysane.ai
August 12, 2025 at 2:13 PM
Reposted by john44234.bsky.social
because whatever is different about human cognition -- whatever separates us from other apes -- seems to relate to the very strong frontal suppression of what are naively very useful instincts. in other words, intelligence seems to be a form of exaptive generalization from specialized capabilities.
August 8, 2025 at 5:31 AM
Reposted by john44234.bsky.social
Every time you share the “3 Bs in blueberry” thing and no one explains its relation to tokenizers, you help everyone get just a tiny bit less informed
August 8, 2025 at 5:39 AM
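The tokenizer point above can be sketched in a few lines. This is a toy, hypothetical subword split (real BPE vocabularies are learned, and `"blue"`/`"berry"` as whole tokens is an assumption for illustration): the model receives subword units, not letters, so a character-level question like "how many b's in blueberry" asks about structure the model never directly sees.

```python
# Toy illustration of why letter-counting is hard for token-based LLMs.
# The subword vocabulary below is hypothetical, standing in for learned BPE merges.
def toy_tokenize(word):
    vocab = ["blue", "berry", "straw", "rasp"]
    pieces, rest = [], word
    while rest:
        for piece in vocab:
            if rest.startswith(piece):
                pieces.append(piece)
                rest = rest[len(piece):]
                break
        else:
            # Fall back to single characters for anything not in the vocab.
            pieces.append(rest[0])
            rest = rest[1:]
    return pieces

tokens = toy_tokenize("blueberry")
print(tokens)                   # ['blue', 'berry'] -- subword units, not letters
print("blueberry".count("b"))   # 2 -- the character-level answer the meme gets wrong
```

The model sees token IDs for `blue` and `berry`; the actual count (two b's, not three) lives at a level of representation the tokenizer has already discarded.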
Reposted by john44234.bsky.social
Total Theory of Vibe cultural victory alignment.anthropic.com/2025/sublimi...
Subliminal Learning: Language Models Transmit Behavioral Traits via Hidden Signals in Data
alignment.anthropic.com
July 22, 2025 at 5:07 PM
Reposted by john44234.bsky.social
specifically, i think that what people describe as "qualia" is coactivation of naively unrelated concepts due to their similarity when projected down to a low-dimensional representation
July 22, 2025 at 5:16 PM
Reposted by john44234.bsky.social
Okay so the neat thing here is that with the way ChatGPT’s memory works, it probably really is ideal for creating psychotic breaks because it lets you ideate anything in a reproducible manner.
unclear if crazy, stupid or on drugs
July 18, 2025 at 1:52 AM
Reposted by john44234.bsky.social
I don’t think anyone is prepared for what they just did w/ ICE.

This is not a simple budget increase. It is an explosion - making ICE bigger than the FBI, US Bureau of Prisons, DEA, & others combined.

It is setting up to make what’s happening now look like child’s play. And people are disappearing.
July 3, 2025 at 6:58 PM
Reposted by john44234.bsky.social
If that’s what they do to a United States Senator with a question, imagine what they can do to any American that dares to speak up. We will hold this administration accountable.
June 12, 2025 at 9:15 PM
Reposted by john44234.bsky.social
hunyuan video (the premier off-the-shelf video diffusion model) comes with a suite of IP-Adapters which allow you to swap in consistent characters and bind action in the scene to existing video, in a way that controls the morphing and clipping that is currently diagnostic of most video models.
May 19, 2025 at 7:15 PM
Reposted by john44234.bsky.social
i think that the answer to "is it intelligence" is neither "no" nor "yes." if you have the background to read it, the Transformer Circuits series of papers from Anthropic is some of the most interesting work out there.
transformer-circuits.pub
Transformer Circuits Thread
Can we reverse engineer transformer language models into human-understandable computer programs?
transformer-circuits.pub
May 7, 2025 at 6:34 PM
Reposted by john44234.bsky.social
anyway, if you were wondering why David Shor keeps arguing that Republicans will never pay an electoral penalty for racism under any circumstances, this is why
March 22, 2025 at 6:20 PM
Reposted by john44234.bsky.social
on the puritan left because i don’t want to discourse with a guy who thinks i’m subhuman
March 22, 2025 at 7:30 PM
Reposted by john44234.bsky.social
Language Models Use Trigonometry to Do Addition

They discover numbers are represented in these LLMs as a generalized helix, which is strongly causally implicated for the tasks of addition and subtraction, and is also causally relevant for integer division, multiplication, and modular arithmetic.
February 4, 2025 at 8:53 AM
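The circular piece of the helix finding above can be sketched with ordinary trigonometry. This is a toy version of the idea, not the paper's method (the period `T` and the complex-number encoding are assumptions for illustration): encode a number as an angle on a circle, and angle addition — multiplication of unit complex numbers — performs addition mod T.

```python
import cmath
import math

T = 100  # period of one hypothetical "helix frequency"

def encode(n):
    # Represent n as a point on the unit circle at angle 2*pi*n/T.
    return cmath.exp(2j * math.pi * n / T)

def add(za, zb):
    # Multiplying unit complex numbers adds their angles, so
    # encode(a) * encode(b) == encode(a + b).
    return za * zb

def decode(z):
    # Read the angle back off and round to the nearest integer mod T.
    return round(cmath.phase(z) * T / (2 * math.pi)) % T

a, b = 27, 48
print(decode(add(encode(a), encode(b))))  # 75, i.e. (27 + 48) % 100
```

A single circle only gives addition modulo its period; stacking several periods (the "generalized helix") is what lets a model disambiguate actual magnitudes rather than just residues.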
Reposted by john44234.bsky.social
By the way you can catch my talk for the 38th Chaos Communication Congress last December here media.ccc.de/v/38c3-feeli...
Feelings of Structure in Life, Art, and Neural Nets
One of the basic ways we navigate the world is through ‘feelings of structure’ -- our experience of the inner logic of a system or a situ...
media.ccc.de
February 3, 2025 at 3:52 AM
Reposted by john44234.bsky.social
Today, we are publishing the first-ever International AI Safety Report, backed by 30 countries and the OECD, UN, and EU.

It summarises the state of the science on AI capabilities and risks, and how to mitigate those risks. 🧵

Full Report: assets.publishing.service.gov.uk/media/679a0c...

1/21
January 29, 2025 at 1:50 PM
Reposted by john44234.bsky.social
One of my grand interpretability goals is to improve human scientific understanding by analyzing scientific discovery models, but this is the most convincing case yet that we CAN learn from model interpretation: Chess grandmasters learned new play concepts from AlphaZero's internal representations.
Bridging the Human-AI Knowledge Gap: Concept Discovery and Transfer in AlphaZero
Artificial Intelligence (AI) systems have made remarkable progress, attaining super-human performance across various domains. This presents us with an opportunity to further human knowledge and improv...
arxiv.org
January 27, 2025 at 9:43 PM
Reposted by john44234.bsky.social
llms are a neat trick that lets you train sequence to sequence while executing autoregressively. diffusion models are a neat trick to learn a distribution-to-distribution map.

the loudest ai skeptics are not at all interested in why these models work so well despite their simplicity
November 14, 2024 at 6:30 AM
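The train-on-sequences / execute-autoregressively split above reduces to a short loop. The `next_token` function here is a stand-in for an LLM (its lookup table is entirely hypothetical); the loop itself is the real point: at inference time the model consumes its own previous outputs, one token at a time.

```python
# Minimal autoregressive decoding loop; next_token stands in for a trained LM.
def next_token(context):
    # Hypothetical "model": a deterministic toy rule, not a real language model.
    table = {"the": "cat", "cat": "sat", "sat": "down"}
    return table.get(context[-1], "<eos>")

def generate(prompt, max_len=10):
    seq = list(prompt)
    # Training sees whole sequences at once; generation feeds each emitted
    # token back in as the next step's input.
    while len(seq) < max_len:
        tok = next_token(seq)
        if tok == "<eos>":
            break
        seq.append(tok)
    return seq

print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
```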
Reposted by john44234.bsky.social
This paper is now officially out, open access here journals.sagepub.com/doi/10.1177/...
February 14, 2024 at 6:50 PM
Reposted by john44234.bsky.social
I am a broken record on this but LLM text embeddings are an incredible breakthrough, and the ability for anyone to build pretty good classifiers with structured output could be insanely useful.

Trying to build NLP interfaces is taking my team an extremely long time and is extremely brittle
November 13, 2024 at 3:49 PM
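The "pretty good classifiers from embeddings" claim above can be sketched as a nearest-centroid classifier. The `embed` function here is a crude bag-of-characters stand-in (a real pipeline would call an actual text-embedding model instead; the labels and training texts are likewise made up), but the classifier logic on top is exactly the kind of thing the post is describing.

```python
import math

def embed(text):
    # Stand-in for a real text-embedding model: a normalized bag-of-characters
    # vector. Swap this for an actual embedding API in practice.
    v = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1.0
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def centroid(vectors):
    # Mean of a list of equal-length vectors.
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(text, centroids):
    # Nearest-centroid: pick the label whose centroid has the highest dot
    # product with the text's embedding.
    v = embed(text)
    return max(centroids, key=lambda lab: sum(a * b for a, b in zip(v, centroids[lab])))

train = {
    "greeting": ["hello there", "hi friend", "good morning"],
    "farewell": ["goodbye now", "see you later", "bye bye"],
}
cents = {lab: centroid([embed(t) for t in texts]) for lab, texts in train.items()}
print(classify("hello friend", cents))
```

With real embeddings the same few lines — embed, average per label, pick the nearest centroid — are often enough to beat a hand-built brittle NLP interface.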