Elay Shech
@elayshech.bsky.social
Philosopher of Science & Physics, AI Ethics & Machine Learning.
Professor at Auburn University | PhD, HPS, University of Pittsburgh
https://elayshech.com/
Plato Warned Us About ChatGPT (And Told Us What to Do About It)

www.templeton.org/news/plato-w...
February 4, 2026 at 12:39 PM
Classical and Quantum Phase Space Mechanics, by Karim Pierre Yves Thébault, is free for two weeks!
cup.org/3MhJMDA
Classical and Quantum Phase Space Mechanics
February 3, 2026 at 6:07 PM
Reposted by Elay Shech
Philosopher @elayshech.bsky.social explains the mind-bending reality of a third class of 2D particles called 'anyons' - using baseballs, holes, loops, and a coffee cup that's topologically equivalent to a doughnut
aeon.co/essays/anyon...
January 26, 2026 at 3:03 PM
Reposted by Elay Shech
In recent years, evidence has been accumulating for a third class of particles called ‘anyons’, which rewrite the rules for how particles move, interact, and combine. However, they’re theoretically possible only in 2D, so what kind of reality do they actually possess, if any at all?
Anyons: the two-dimensional particles that reframe reality | Aeon Essays
Physicists believe a third class of particles – anyons – could exist, but only in 2D. What kind of existence is that?
January 26, 2026 at 11:45 AM
Free download until Jan 26!

cup.org/4bCdU6K
Health and Disease
January 13, 2026 at 6:40 PM
Opinion | Science Keeps Changing. So Why Should We Trust It?
www.nytimes.com
January 5, 2026 at 1:24 PM
Free to download until Oct 3!

doi.org/10.1017/9781...
Philosophy of Cosmology and Astrophysics
September 19, 2025 at 2:17 PM
Read all Elements in The Philosophy of Biology series for free during the ISHPSSB conference, 20-25 July.

cup.org/4kEgivL
Philosophy of Biology
July 20, 2025 at 6:23 PM
Researchers ranked AI models on scientific Q&A. OpenAI’s o3 came first.
www.nature.com/articles/d41...
OpenAI’s o3 tops new AI league table for answering scientific questions
SciArena uses votes by researchers to evaluate large language models’ responses on technical topics.
July 12, 2025 at 8:16 PM
Reposted by Elay Shech
A new paper shows that the “creativity” of certain AI models may actually be a direct, inevitable consequence of how they are built. Webb Wright reports:
www.quantamagazine.org/researchers-...
Researchers Uncover Hidden Ingredients Behind AI Creativity | Quanta Magazine
Image generators are designed to mimic their training data, so where does their apparent creativity come from? A recent study suggests that it’s an inevitable by-product of their architecture.
June 30, 2025 at 2:10 PM
Reposted by Elay Shech
arxiv.org/abs/2506.18852

It's kinda already happening though, MI groups are already peppered with philosophers...
Mechanistic Interpretability Needs Philosophy
Mechanistic interpretability (MI) aims to explain how neural networks work by uncovering their underlying causal mechanisms. As the field grows in influence, it is increasingly important to examine no...
June 27, 2025 at 3:44 PM
Reposted by Elay Shech
Which came first: colorful signals or the color vision needed to see them? Scientists reconstructed 500 million years of evolutionary history to find out. @mollyherring.bsky.social reports: www.quantamagazine.org/when-did-nat...
When Did Nature Burst Into Vivid Color? | Quanta Magazine
June 27, 2025 at 2:31 PM
"Models... resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals—including blackmailing officials and leaking sensitive information to competitors. We call this phenomenon agentic misalignment."
www.anthropic.com/research/age...
Agentic Misalignment: How LLMs could be insider threats
New research on simulated blackmail, industrial espionage, and other misaligned behaviors in LLMs
June 23, 2025 at 7:41 PM
Reposted by Elay Shech
@bostonreview.bsky.social just published a Forum on a recent (critical) post-mortem of US COVID policy, with responses from me and @cailinmeister.bsky.social, along with @adamjkucharski.bsky.social, Adam Gaffney, and @jonathanpjwhite.bsky.social. V. interesting!

www.bostonreview.net/forum/how-di...
How Did We Fare on COVID-19? - Boston Review
To restore public trust and prepare for the next pandemic, we need a reckoning with the U.S. experience—what worked, and what didn’t.
June 19, 2025 at 4:16 PM
Reposted by Elay Shech
Computer algorithms have designed highly efficient synthetic enzymes from scratch

https://go.nature.com/43PmE5s
‘Remarkable’ new enzymes built by algorithm with physics know-how
Nature - Computer approach creates synthetic enzymes 100 times more efficient than those designed by AI.
June 21, 2025 at 4:12 PM
OpenAI finds that tiny bits of bad data can trigger “misaligned personas” in LLMs—broad toxic behaviors from narrow inputs. But these features are detectable and reversible. A new path for AI debiasing?

cdn.openai.com/pdf/a130517e...
June 21, 2025 at 2:15 PM
Will AI take our jobs — or will companies reinvest in helping us do them better?

Mechanize, a new AI startup, isn’t subtle: it wants to “fully automate work… as fast as possible.”

www.nytimes.com/2025/06/11/t...
This A.I. Company Wants to Take Your Job
June 21, 2025 at 2:14 PM
Can large language models support mental health—or do they risk causing harm?

A Stanford study found that when prompted to act as therapists, LLMs often gave advice that was misleading, inaccurate, or inappropriate.

www.sfgate.com/tech/article...
One of ChatGPT's popular uses just got skewered by Stanford researchers
When the stakes are high, a robot therapist falls way short, researchers found.
June 21, 2025 at 2:14 PM