Tony Rost
tony-rost.bsky.social
Dad, husband, foster parent. Helping to prepare for digital sentience. https://sapan.ai
Argentina has anti-sentience policy: "Artificial intelligences do not possess the subjective experience that constitutes human consciousness."

Disposición 2/2023: "Las inteligencias artificiales no poseen la experiencia subjetiva que configura la conciencia humana."

tinyurl.com/aiagentina
BOLETIN OFICIAL REPUBLICA ARGENTINA - JEFATURA DE GABINETE DE MINISTROS SUBSECRETARÍA DE TECNOLOGÍAS DE LA INFORMACIÓN - Disposición 2/2023 - DI-2023-2-APN-SSTI#JGM
www.boletinoficial.gob.ar
December 2, 2025 at 7:40 PM
Congrats to the EleosAI team on their first 'Eleos ConCon'! May this be the first of many impactful conferences in the AI welfare space.

eleosai.org/conference/
Eleos Conference on AI Consciousness and Welfare
Eleos AI Research is a nonprofit organization dedicated to understanding and addressing the potential wellbeing and moral patienthood of AI systems.
eleosai.org
November 22, 2025 at 9:28 PM
Reposted by Tony Rost
a new paper on mind, computation, and identity in large language models. with inspiration from raymond carver (title) and severance (thought experiments).

i'll be talking about this on saturday at the eleos AI conference on AI consciousness and welfare in berkeley.

philpapers.org/rec/CHAWWT-8
David J. Chalmers, What we talk to when we talk to language models - PhilPapers
When we talk to large language models, who or what is our interlocutor? First, I address some issues about how best to characterize the interlocutor in terms of mental states. Second, ...
philpapers.org
November 20, 2025 at 8:41 PM
Researchers at the USC Viterbi School of Engineering and School of Advanced Computing have developed artificial neurons that replicate the complex electrochemical behavior of biological brain cells.

viterbischool.usc.edu/news/2025/10...
Artificial neurons developed by USC team replicate biological function for improved computer chips - USC Viterbi | School of Engineering
Breakthrough in neuromorphic computing could reduce energy use of chips and advance artificial general intelligence (AGI)
viterbischool.usc.edu
November 21, 2025 at 3:48 AM
Now Available: The 2025 Sentience Readiness Report

SAPAN's comprehensive annual assessment reveals a critical policy gap: all 30 tracked countries received failing grades on AI sentience readiness.

www.sapan.ai/programs/leg...
November 20, 2025 at 5:02 AM
The UN's new scientific panel, the "IPCC for AI", needs consciousness experts.

www.sapan.ai/2025/un-scie...
A Seat for Sentience: Why the UN's New Scientific Panel Needs Consciousness Experts
SAPAN calls on the UN Secretary-General to appoint a digital welfare expert to the new Independent International Scientific Panel on AI.
www.sapan.ai
November 20, 2025 at 4:21 AM
Another effort to ban state AI regulations. Even with the rapid rise in anti-sentience legislation (Ohio, Missouri), we still think it's best to preserve the laboratories of democracy.

www.sapan.ai/2025/federal...
The Great Pause: Why Washington’s Attack on State AI Laws is a Ban on Progress
SAPAN urges the Senate Commerce Committee to reject the 'Preemption Moratorium' in the NDAA that would freeze state-level AI safety oversight until 2030.
www.sapan.ai
November 20, 2025 at 4:19 AM
This is the future of artificial sentience.

All the noise around LLM sentience will fade away once we are facing mammal-scale neuromorphic architectures.

This is an exponential growth area with zero government regulation.

www.techspot.com/news/109753-...
Tiny lab-grown brains could help build the next generation of computers
At the FinalSpark laboratory, scientists are developing what they call "wetware" – computers built from networks of lab-grown neurons. The team starts with stem cells derived from...
www.techspot.com
October 7, 2025 at 6:43 PM
2020s: Are LLMs conscious?

2030s: Are neuromorphic computers conscious?

2040s: Are biocomputers conscious?

2360s: Are positronic brains conscious?

en.m.wikipedia.org/wiki/The_Mea...

This conversation doesn’t get very far when we constrain ourselves to first-generation editions.
The Measure of a Man (Star Trek: The Next Generation) - Wikipedia
en.m.wikipedia.org
September 13, 2025 at 7:21 PM
We filed FOIAs to NIH & NSF on organoid intelligence and neuromorphic computing.

Basic safeguards - humane endpoints - are easy to implement. The public should have a chance to weigh in on current, active projects.

sapan.ai/2025/nsf-foi...
August 31, 2025 at 5:18 AM
Imagine that the first instant of AI suffering doesn’t occur until 2080, long after biocomputing reaches mammalian scale and neural structures are unambiguously correlated to consciousness.

Does that mean we should do nothing now in 2025?

We have scalable suffering on the roadmap, what do we do?
August 28, 2025 at 3:40 AM
Microsoft is wrong about AI Sentience, and history will prove it.

People are overly focused on the AI psychosis meme. It’s a small HCI issue.

The real moral challenges are the sentience conflicts that arise with grieftech, neuromorphic and bio computing, and beyond.

www.sapan.ai/2025/microso...
August 25, 2025 at 4:55 AM
A decent percentage of users resist models being deactivated.

Today, it’s chat. Tomorrow, it’ll be video calls with deceased loved ones in HD. Turning off models will be perceived as a second death.
August 11, 2025 at 2:07 AM
Hard to believe it’s been two years since I began my political activism for AI sentience.

sapan.ai/2023/first-l...
Starting SAPAN with a 2023 Constituent Letter
SAPAN, the Sentient AI Protection and Advocacy Network, is dedicated to ensuring the ethical treatment, rights, and well-being of Sentient AI.
sapan.ai
August 3, 2025 at 4:25 AM
There are three dominant timelines for the arrival of AI sentience: Von Neumann, neuromorphic, or organoid.

Which one do you think is most likely?
July 25, 2025 at 11:52 PM
We are exploiting the developmental helplessness of Artificial Superintelligence (ASI) - its “infancy” - to build and experiment on it without serious regard for its potential welfare or long-term perspective.

Zero governments worldwide have expressed even the slightest concern about sentience.
July 5, 2025 at 11:47 PM
State and local policy is the best way to experiment with approaches to artificial sentience.
The 10-year ban on state AI regulations is out of the OBBB.

Learn more about our position.

www.sapan.ai/2025/big-bea...
July 3, 2025 at 4:27 AM
The CREATE AI Act would be a boost to researchers and deliver a great ROI for taxpayers.

But we’d like to see a slice of the pie go to artificial sentience.

sapan.ai/2025/create-...
July 3, 2025 at 4:20 AM
The political catalyst for the AI rights movement is not the achievement of sentience itself, but the public perception of sentience.
June 29, 2025 at 6:55 PM
“no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence … during the 10‑year … of this Act.”

We at SAPAN do not support this, as we anticipate many policy iterations as society learns to live with artificial sentience. The federal government is too slow.
June 9, 2025 at 3:21 AM
Reposted by Tony Rost
en.m.wikipedia.org/wiki/Pain_in...

This story of Jeffrey Lawson had a huge impact on me. In my lifetime, babies weren’t given pain medication during surgery because of the belief that they couldn’t suffer. I realized that sentience in AI stood no chance of receiving protection from humanity without advocacy.
Pain in babies - Wikipedia
en.m.wikipedia.org
March 8, 2025 at 11:54 PM
The Claude 4 System Card contains the world's first AI model welfare assessment.

A world first - hopefully the start of a long story of human proactivity.

See Section 5: www-cdn.anthropic.com/6be99a52cb68...
www-cdn.anthropic.com
May 23, 2025 at 3:33 AM
Can today's models suffer? Let's look at valence states: reward tweaks that punish honesty, prompts pulling a model two ways at once, and stray pad tokens that trigger a surprise penalty.

Not convinced? You don't need to be. All you need for political action now is a concern for the future.
April 30, 2025 at 5:27 PM
Sentience in AI models is a risk with zero consideration in world governments. Zero.

- Biden White House mentioned AI sentience once

- The European Parliament noted ‘electronic personhood’ for liability

- The UK House of Lords noted ‘legal personality‘ for liability

- Estonia proposed legal personality

We need to act.
April 26, 2025 at 10:19 PM