Samantha Augusta
@samantha-augusta.bsky.social
💥 🌊 📈
HAI Fellow @ Stanford focusing on risk & safety 🖇️ 🦜
What an incredible event this was! 🤗 If you missed it, you can watch the recording here: www.youtube.com/watch?v=fEQB...
November 22, 2025 at 6:10 AM
Reposted by Samantha Augusta
Find out why microplastics are an important issue, why it's hard to test for them, and how we’re working to solve the problem. Sign up for our virtual event tomorrow, November 20, at 12 pm ET:
CR's Microplastics Challenge - Meet the Winners!
While there is growing concern about the effects of microplastics on the environment and human health, understanding the extent of their harm is difficult. That's why Consumer Reports ran a crowdsourced challenge with our partner Lifeguard to work towards a quick and inexpensive test for microplastics that consumers could use themselves without special training.
action.consumerreports.org
November 19, 2025 at 11:00 PM
Reposted by Samantha Augusta
Sign our petition for the government to hold social media companies accountable for posting ads that could scam consumers: action.consumerreports.org/sm-20251117-...
November 20, 2025 at 5:00 PM
Concerned about microplastics? Join the 1,500+ people who have already RSVP'd for our webinar on Thursday at 12:00 pm EST on new solutions for detecting microplastics in food ⚡️

Hear from the innovators who developed these new solutions and from @consumerreports.org's food safety experts!
Microplastics are being found here, there, and everywhere we look 👀 We decided to do something about it in a crowdsourcing challenge with @consumerreports.org. Come meet the winners of the challenge this Thursday at 12 pm EST! RSVP here 👇
action.consumerreports.org/20251120micr... #microplastics
CR's Microplastics Challenge - Meet the Winners!
action.consumerreports.org
November 17, 2025 at 9:49 PM
What do you think? Is AI a normal general-purpose technology, or something different?

#bluesky #ai #tech
Lots to like in this piece. Still, I don't think Narayanan and Kapoor address what's unusual about AI versus past general-purpose tech—even if it mostly behaves like “normal” technology: knightcolumbia.org/content/ai-a...
November 17, 2025 at 7:59 AM
Lots to like in this piece. Still, I don't think Narayanan and Kapoor address what's unusual about AI versus past general-purpose tech—even if it mostly behaves like “normal” technology: knightcolumbia.org/content/ai-a...
November 17, 2025 at 7:07 AM
Microplastics are being found here, there, and everywhere we look 👀 We decided to do something about it in a crowdsourcing challenge with @consumerreports.org. Come meet the winners of the challenge this Thursday at 12 pm EST! RSVP here 👇
action.consumerreports.org/20251120micr... #microplastics
CR's Microplastics Challenge - Meet the Winners!
action.consumerreports.org
November 17, 2025 at 6:43 AM
Remember Dieselgate? Companies lied about emissions, and execs faced criminal charges—in some jurisdictions, personal liability applied.

Why should AI companies that skirt safety evals or ignore known risks be treated differently?

#bluesky #AI
July 14, 2025 at 5:46 AM
AI isn’t out of control.
It’s under control—just not ours.

#bluesky
July 11, 2025 at 3:10 AM
This is a rarity in frontier AI discourse: Bengio et al. endorse the precautionary principle—arguing we must prove safety before scaling.

Their “Scientist AI” proposal allows us to disable agentic and planning components—building in off-switches from the start.

📄 arxiv.org/abs/2405.20009 #bluesky
July 2, 2025 at 5:46 PM
Some great initiatives for tracking AI harms that I've been following include:

- AIAAIC (www.aiaaic.org/aiaaic-repos...) and
- MIT's AI Incident Tracker (airisk.mit.edu/ai-incident-...).

Pretty shocking to see the numbers on autonomous vehicle incidents. Very few of these reach the headlines.
June 30, 2025 at 10:29 PM
Not all x-risk bangs. Some simmer. Kasirzadeh warns: AI may collapse social systems via accumulative harms—slow-moving, systemic, invisible. Real systems unravel through misalignments over time.

AI safety needs tools to track compound harm.

📑 arxiv.org/abs/2401.07836

#TechEthics #bluesky
June 30, 2025 at 9:38 PM
Most model evals focus on benchmarks—but what about catastrophic misuse? Shevlane et al. propose tools for extreme risk evals, urging labs to test frontier AI models for deception, persuasion, and autonomy before deployment.

To what extent is this happening in practice?

📄 arxiv.org/abs/2305.15324
June 30, 2025 at 4:39 AM
What if existential risk from AI doesn’t arrive with a bang, but builds slowly beneath our feet? Kasirzadeh warns of a ‘boiling frog’ scenario—AI risks that compound silently, eroding systems until collapse. We must reckon with both the decisive and the accumulative 💭

📄 arxiv.org/abs/2401.07836
June 28, 2025 at 10:49 PM
What I find useful in Peterson’s approach is how it sidesteps the usual “which theory is right?” trap. Instead of starting with utilitarianism or deontology, he looks at recurring judgment patterns. That kind of mid-level mapping seems especially helpful in bio risk, where the stakes are so high.
June 27, 2025 at 7:41 PM
Reading Martin Peterson’s Ethics of Technology has me thinking. He doesn’t push a grand theory — he models how we actually reason in practice. 5 principles show up again and again: cost-benefit, precaution, sustainability, autonomy, fairness. Not foundational; functional for the domain in question.
June 27, 2025 at 7:38 PM
Let's forget the quest for the one true ethical theory and focus on the Goldilocks zone: mid-level, domain-specific rules distilled from similarity clusters across real cases. Concrete enough to steer frontier AI & biotech, flexible enough to evolve—no consensus needed. #TechEthics #AISafety
June 27, 2025 at 7:35 AM
Reposted by Samantha Augusta
“Tesla’s driverless ‘robotaxis’ could launch in Austin as soon as June 22. But a demo in Austin today showed a $TSLA, manually driven to test its Full Self-Driving system, failed to stop for a child-sized dummy at a school bus—and hit it.”

@cbsaustin @velez_tx
June 13, 2025 at 11:28 AM
1/9 Right after landing back in San Francisco, I was greeted by the billboard pictured below: “Win the AGI Race.” I had just returned from KU Leuven's conference on Large-Scale AI Risks, where we spent several days in serious conversation about the long-term consequences of advanced AI systems.
June 13, 2025 at 7:38 PM
Back from KU Leuven's AI risk conf, greeted by this billboard: “Win the AGI Race.” After days discussing disempowerment—the slow loss of human influence from AI—this felt ominous. If 'winning' = automating all work, what’s left? Safety must outrun speed. 📄 arxiv.org/abs/2501.16946 #AI #AISafety #tech
June 13, 2025 at 7:27 PM
Presented my AI safety work at KU Leuven’s International Conference on Large-Scale AI Risks. Both terrifying & exciting to be on the same schedule as people I’ve been reading for years! The tone throughout was frank, serious, and grounded in hard questions.
May 30, 2025 at 8:27 AM
Reposted by Samantha Augusta
🥣 More sugar in cereal

A study of 1,200 kids’ cereals launched since 2010 finds rising fat, salt & sugar – and falling protein & fibre.

Despite health claims, many cereals now pack over 45% of a child's daily sugar limit per bowl.

🔗 doi.org/10.1001/jama...

#Nutrition #ChildHealth #SciComm 🧪
Nutritional Content of Ready-to-Eat Breakfast Cereals Marketed to Children
This cross-sectional study examines trends in the nutritional composition of children’s ready-to-eat cereals introduced in the US market from 2010 to 2023.
doi.org
May 25, 2025 at 8:40 AM
You’ve probably heard of AI companion Replika and its disturbing marketing—”Always here to listen and talk. Always on your side”. But have you heard of Gigi and Cluely? @luizajarovsky.bsky.social is doing a fantastic job covering the rise of a whole new class of unethical AI.
🚨 AI tools explicitly designed to cheat, invade privacy, trick people, and violate rights are on the rise. We seem to be entering a new and unsettling phase in AI.

Read today's essay and subscribe to my newsletter using the link below:
May 15, 2025 at 11:00 PM
Reposted by Samantha Augusta
🥣🧪 Bioplastics might not be as 'green' as they seem.

A new study found long-term exposure to starch-based microplastics caused liver, gut & ovarian damage in mice - and disrupted blood sugar & circadian rhythms.

🔗 doi.org/10.1021/acs....

#Plastics #Toxicology #SciComm
Long-Term Exposure to Environmentally Realistic Doses of Starch-Based Microplastics Suggests Widespread Health Effects
There is a growing consensus on addressing the global plastic pollution problem by advocating for bioplastics. While starch-based plastics are prevalent, the potential health implications of starch-ba...
doi.org
May 15, 2025 at 4:52 AM