Charley Johnson
@charleyjohnson.bsky.social
Give it a read / listen and sign up to Untangled while you're at it - untangled.substack.com/p/what-if-we...
What If We Regulated Chatbots Like Any Other Product?
Listen now | My conversation with Ben Winters, Director of AI and Privacy at Consumer Federation of America
untangled.substack.com
February 8, 2026 at 4:48 PM
But more than any specific recommendation, the bill serves as a reminder of the kind of world we could live in, articulating an alternative future we could inhabit. And here’s the good news: we know how to get there, and state legislators are increasingly receptive.
February 8, 2026 at 4:48 PM
Data minimization over consent: Instead of relying on consent checkboxes that users click through out of fatigue, the bill prohibits using personal data from outside chatbot interactions.
Private right of action: Harmed individuals can sue directly, not just rely on overwhelmed state attorneys general.
February 8, 2026 at 4:48 PM
In our conversation, Ben and I dug into the bill's key provisions, including:

Product liability: The bill leverages centuries of product liability law to hold companies accountable for design choices, rather than treating chatbots as neutral tools.
February 8, 2026 at 4:48 PM
Yet, as @benwinters.bsky.social points out in our conversation, every aspect of a chatbot—from training data to interface design to what responses get blocked—represents a series of choices by companies. When those choices foreseeably lead to harm, companies should be held accountable.
February 8, 2026 at 4:48 PM
Tech companies have successfully made chatbots seem like mystical, uncontrollable entities while simultaneously claiming they can be trusted without regulation.
February 8, 2026 at 4:48 PM
Today, I’m sharing my conversation with @benwinters.bsky.social, Director of AI and Privacy at @consumerfed.bsky.social about The People First Chatbot Bill—model legislation for regulating chatbots that’s been endorsed by over 70 organizations.
February 8, 2026 at 4:48 PM
We're in a moment that desperately needs imagination and curiosity.
Remember: this is a capability humans have that machines will never possess.

Read the full piece - untangled.substack.com/p/the-intell...
The Intelligence of a Hunch
What AI Will Never Have
untangled.substack.com
February 5, 2026 at 5:25 PM
This capacity for imaginative leaps — for seeing new possibilities beyond what the data shows — is what makes us human.

It's also what allows us to recognize when our entire framework needs replacement, when the categories we're using are part of the problem.
February 5, 2026 at 5:25 PM
Abduction is the power of a hunch, a gut instinct: seeing a wet street and making a contextual guess about what caused it, not just concluding "it must be raining because rain makes streets wet," as Erik Larson reminds us.
February 5, 2026 at 5:25 PM
This is why AI systems break when they encounter:
→ Novel situations
→ Exceptions to patterns
→ Unlikely events (the "long tail problem")

What's missing?
Abduction — the reasoning that moves from observation to hypothesis. The detective work of seeing clues and generating explanations.
February 5, 2026 at 5:25 PM
But! World models fall into the same trap as LLMs. They're both doing induction — observing patterns in past data to predict the future. And induction has a fatal flaw: it assumes the future will resemble the past. The sun rose yesterday and today, but that doesn't guarantee tomorrow.
February 5, 2026 at 5:25 PM
His solution? World models trained on video games and robotics data, on the assumption that intelligence emerges from interacting with an environment.
February 5, 2026 at 5:25 PM
World models won't save AI from its fundamental limitation

Google DeepMind's Demis Hassabis recently said LLMs "just predict the next token based on statistical correlations" and "don't really know why A leads to B." Glad we solved that mystery!
February 5, 2026 at 5:25 PM
New post out this Sunday on the myth of the crowd, and why we’re all speculating on uncertainty. Subscribe to Untangled today to get it in your inbox. untangled.substack.com
Untangled with Charley Johnson | Substack
A newsletter and podcast about our sociotechnical world, and how to change it. Click to read Untangled with Charley Johnson, a Substack publication with thousands of subscribers.
untangled.substack.com
July 25, 2025 at 4:46 PM
This isn't a democratic market of independent thinkers. It's a hierarchical system where a small elite signals, and everyone else reacts.

The result? Accuracy that looks like crowd wisdom—but is really just a reflection of power.
July 25, 2025 at 4:46 PM
A study of 500 Polymarket contracts found that information doesn’t flow evenly. It cascades from elite traders down through the system in predictable sequences.

– High-frequency traders move first
– Active traders follow
– Retail traders trail behind
July 25, 2025 at 4:46 PM
The "Wisdom of Crowds" is a lie.

Prediction markets claim to reflect the wisdom of the crowd.

But new research shows they actually reflect something else: power.
July 25, 2025 at 4:46 PM
Thanks so much for the shout out @newpublic.org !
@charleyjohnson.bsky.social’s course on July 19-20 is a great resource for folks working toward true systems change.

It comes highly recommended by our Community Engagement Manager Hays Witt, and leans on many of the multidisciplinary tools and principles we use in our work.
Systems Change for Tech & Society Leaders
Learn how to shift power imbalances holding you and your system back.
untangled.substack.com
July 11, 2025 at 12:15 AM
How to reclaim our agency in an age of AI.
June 27, 2025 at 2:03 PM
Alternative visions of AI that center consent, community ownership, and context, and that don’t come at the expense of people’s livelihoods, public health, or the environment.
June 27, 2025 at 2:03 PM
Boomers, doomers, and the religion of AGI.
June 27, 2025 at 2:03 PM
How the companies pursuing this approach represent a modern-day empire, and the role narrative power plays in sustaining it.
June 27, 2025 at 2:03 PM