Signal to Human
signaltohuman.bsky.social
Writes a Substack exploring what AI means for humans: implications, ethics, and worldview shifts, plus the odd foray into the insanity of human political systems
https://signaltohuman.substack.com
The debate we need is not about whether AI will take our jobs or rewrite our politics or the economy. The debate is about whether we intend to remain a species defined by agency, judgment, courage, and the willingness to face the world without a perpetual algorithmic safety net.
November 21, 2025 at 10:28 AM
The Savage’s rebellion in Brave New World is not against oppression but against comfort. Mond replies that such desires are barbaric. The Savage responds with one of the great lines of modern literature: “I claim them all.”
November 21, 2025 at 10:27 AM
Every civilization has its heresies. Ours is the belief that ease is identical with progress.
November 21, 2025 at 10:25 AM
Every AI debate should circle back to one central issue: “How much of the future should be automated and how much should remain human?” This isn't talked about enough!
open.substack.com/pub/signalto...
How Much of the Future Should Be Automated and How Much Should Remain Human?
A Warning Against the Coming Comfort-Totalitarianism
November 21, 2025 at 10:24 AM
The optimistic view: AI will free us from cognitive grunt work and let us focus on higher-level thinking.
The pessimistic view: We will use the time saved to consume more dopamine pellets and argue about pseudonymous influencers. Both are likely true.
open.substack.com/pub/signalto...
Will AI Make Us Stupid?
A Field Guide to the Coming Cognitive Apocalypse
November 16, 2025 at 10:04 PM
Mechanistic interpretability offers a compelling vantage point. If we cannot explain how machines think, we risk delegating power to systems we do not understand. And power without understanding is exactly what mythologies warn us against.
open.substack.com/pub/signalto...
Mechanistic Interpretability: Peering Inside the Black-Box Before It Starts Writing the Rules
In the last decade, artificial intelligence has surged from academic curiosity to central force in society.
November 1, 2025 at 10:41 PM
"If one must assign a probability, I’d say there is a greater than 50% chance that Altman is, in fact, trying to place world benefit above self, albeit imperfectly and with friction. He is a flawed vessel for a noble mission."
substack.com/inbox/post/1...
Mission or Mask? Evaluating Whether Sam Altman Really Puts the World First
The question of whether a tech leader can sincerely prioritize humanity’s welfare over personal gain is not abstract; it’s existential.
October 13, 2025 at 9:34 PM
Can we really align AGI if we can’t even align power? I've had some thoughts
open.substack.com/pub/signalto...
Does the Rise of Global Authoritarianism Complicate or Simplify the Alignment Problem for AGI?
“Power tends to corrupt, and absolute power corrupts absolutely.” - Lord Acton
October 9, 2025 at 8:38 PM
A deeper analysis of one of my favourite themes - when is heterodoxy just orthodoxy in waiting?
open.substack.com/pub/signalto...
Free Speech, Power, and the Vanishing Heterodox: From Biden to Trump
One of the stranger dynamics in American political culture is the role of so‑called heterodox voices, the self‑styled defenders of free speech, civil liberties, and “independent thinking.” Over the la...
October 2, 2025 at 12:03 PM
And the “free thinkers” who shouted loudest about censorship under Dems often seem to lose their voices when it’s Trump applying pressure. So their heterodoxy was really orthodoxy in waiting - opposition to liberal dominance, not a consistent defense of free speech?
October 2, 2025 at 12:02 PM
When out of power, the right said: “Tech is censoring conservatives, let everyone speak.”
Now Trump is threatening tech companies to stop discussion of Jan 6: “Don’t let them speak.”
It’s not about principle, it’s about control. Free speech for me, but not for thee. Am I wrong?
October 2, 2025 at 12:02 PM
At their polling peak in early 2013, Labour had a 12-point lead (Lab 41%, Con 29%) which would have translated into a majority of 120 seats in the Commons. 2 years later, Tory majority. A week is a long time in politics, nearly 4 years (until the next GE) is a lifetime
October 2, 2025 at 12:02 PM
Leaders take AI seriously when it looks like a weapon, but much less so when it’s an economic, environmental, or civic disruption. On those fronts, they mostly talk in broad strokes while ceding real power to corporations. Let's start with energy policy.
open.substack.com/pub/signalto...
AI’s Hidden Appetite
Why We’re Not Talking About the Energy Cost of Intelligence
September 6, 2025 at 9:29 PM
Your occasional reminder that despite Reform leading in the polls, the next general election is FOUR years away!
September 6, 2025 at 9:02 PM
Watching the RFK Jr. Senate hearing feels a bit like ‘Survivor: CDC Edition.’ Vote someone off the island, bring in the sworn testimony plot twist.
September 4, 2025 at 10:35 PM
“What happens when boredom, silence, or slowness disappears from human experience because LLMs fill every gap with stimulation?”
open.substack.com/pub/signalto...
Question 10: “What happens when boredom, silence, or slowness disappears from human experience because LLMs fill every gap with stimulation?”
The Last Bored Human
September 4, 2025 at 9:43 PM
The future of intelligence may not depend on how big our models get, or how fast our chips run. It may depend on something simpler, more fragile: whether we can keep them connected to us. To our anomalies, our outliers, our contradictions, our strangeness.
open.substack.com/pub/signalto...
August 18, 2025 at 9:37 AM
Do LLM systems need something we’ve never tried to engineer before: intellectual biodiversity? Not just a few quirky stylistic settings, but real divergence in worldview, reasoning style, and the raw material of thought. open.substack.com/pub/signalto...
Do We Need an Intellectual Biodiversity Strategy for LLMs?
August 15, 2025 at 2:28 PM
File under stating the obvious where the obvious needs to be spoken: “Free speech” is being used as a rhetorical weapon by JD Vance and others rather than as a universally applied principle. It can be defended abroad and bent at home when it serves the political project.
August 12, 2025 at 9:55 PM
"The default instinct is to react to GPT-5 as if it is the best OpenAI can do, and to update on progress based on that assumption. That is a dangerous assumption to be making, as it could be substantially wrong, and the same is true for Anthropic."
thezvi.substack.com/p/gpt-5s-are...
GPT-5s Are Alive: Basic Facts, Benchmarks and the Model Card
GPT-5 was a long time coming.
August 11, 2025 at 9:38 PM
"We should appreciate that aspects of civilisation’s trajectory may well get determined this century, and appreciate the obligation that gives us to try to steer that trajectory in a positive direction." www.forethought.org/research/per...
Persistent Path-Dependence: Why Our Actions Matter Long-Term
Forethought argues against the "wash out" objection: AGI-enforced institutions enable persistent impact.
August 11, 2025 at 9:37 PM
If authorship dissolves into a human-machine co-production, we might stop expecting that a piece of art reflects a human’s lived experience, and we might lose the intimacy that comes from thinking: A person felt this, thought this, and gave it to me. open.substack.com/pub/signalto...
August 11, 2025 at 9:26 PM
This is the 6th in a 10-part series called "10 Questions We're Not Asking About LLMs - But Should Be," exploring the more human implications of LLMs. If you've been reading along, thank you. If you're just joining now, welcome: each piece stands alone.
open.substack.com/pub/signalto...
Are We Outsourcing Not Just Knowledge, But Attention, Memory, and Judgment?
This is the sixth in a ten-part series called "Ten Questions We're Not Asking About LLMs - But Should Be," exploring the less obvious, more human implications of large language models.
August 7, 2025 at 6:12 PM
In an age where large language models (LLMs) can pass the bar exam, explain quantum physics, and write sonnets in the style of Rilke, what does it mean to “know” something?
open.substack.com/pub/signalto...
What Does It Mean to “Know” Something When LLMs Can Simulate Expertise?
In an age where large language models (LLMs) can pass the bar exam, explain quantum physics, write sonnets in the style of Rilke, and tutor students in organic chemistry, we must confront a deceptivel...
August 5, 2025 at 9:29 AM
Could we imagine a model that occasionally replies: “There are no good words for this.” Or, “Maybe now is not the time to answer.” Or even just: [...] - a respectful blank.
open.substack.com/pub/signalto...
August 2, 2025 at 10:20 PM