Sayash Kapoor
sayash.bsky.social
Sayash Kapoor
@sayash.bsky.social
CS PhD candidate at Princeton. I study the societal impact of AI.
Website: cs.princeton.edu/~sayashk
Book/Substack: aisnakeoil.com
Reposted by Sayash Kapoor
(1/4) Ever wondered what tech policy might look like if it were informed by research on collective intelligence and complex systems? 🧠🧑‍💻

Join @jbakcoleman.bsky.social, @lukethorburn.com, and myself in San Diego on Aug 4th for the Collective Intelligence x Tech Policy workshop at @acmci.bsky.social!
May 19, 2025 at 11:01 AM
Reposted by Sayash Kapoor
New commentary in @nature.com from professor Arvind Narayanan (@randomwalker.bsky.social) & PhD candidate Sayash Kapoor (@sayash.bsky.social) about the risks of rapid adoption of AI in science - read: "Why an overreliance on AI-driven modelling is bad for science" 🔗

#CITP #AI #science #AcademiaSky
Why an overreliance on AI-driven modelling is bad for science
Without clear protocols to catch errors, artificial intelligence’s growing role in science could do more harm than good.
www.nature.com
April 9, 2025 at 6:19 PM
Reposted by Sayash Kapoor
In a new essay from our "Artificial Intelligence and Democratic Freedoms" series, @randomwalker.bsky.social & @sayash.bsky.social make the case for thinking of #AI as normal technology, instead of superintelligence. Read here: knightcolumbia.org/content/ai-a...
AI as Normal Technology
knightcolumbia.org
April 15, 2025 at 2:34 PM
Reposted by Sayash Kapoor
“The rush to adopt AI has consequences. As its use proliferates…some degree of caution and introspection is warranted.”

In a comment for @nature.com, @randomwalker.bsky.social and @sayash.bsky.social warn against an overreliance on AI-driven modeling in science: bit.ly/4icM0hp
Why an overreliance on AI-driven modelling is bad for science
Without clear protocols to catch errors, artificial intelligence’s growing role in science could do more harm than good.
bit.ly
April 16, 2025 at 3:42 PM
Reposted by Sayash Kapoor
Science is not a collection of findings. Progress happens through theories. As we move from findings to theories, things are less amenable to automation. The proliferation of scientific findings based on AI hasn't accelerated—and might even have inhibited—higher levels of progress. www.nature.com/articles/d41...
Why an overreliance on AI-driven modelling is bad for science
Without clear protocols to catch errors, artificial intelligence’s growing role in science could do more harm than good.
www.nature.com
April 9, 2025 at 3:45 PM
I spent a few hours with OpenAI's Operator automating expense reports. Most corporate jobs require filing expenses, so Operator could save *millions* of person-hours every year if it gets this right.

Some insights on what worked, what broke, and why this matters for the future of agents 🧵
February 3, 2025 at 6:04 PM
Reposted by Sayash Kapoor
Excellent post discussing whether "AI progress is slowing down".

www.aisnakeoil.com/p/is-ai-prog...

And if you're not subscribed to @randomwalker.bsky.social and @sayash.bsky.social 's great newsletter, what are you waiting for?
Is AI progress slowing down?
Making sense of recent technology trends and claims
www.aisnakeoil.com
December 19, 2024 at 11:57 PM
Reposted by Sayash Kapoor
Excited to share that AI Snake Oil is one of Nature's 10 best books of 2024! www.nature.com/articles/d41...
The whole first chapter is available online:
press.princeton.edu/books/hardco...
We hope you find it useful.
December 18, 2024 at 12:12 PM
More than 60 countries held elections this year. Many researchers and journalists claimed AI misinformation would destabilize democracies. What impact did AI really have?

We analyzed every instance of political AI use this year collected by WIRED. New essay w/@random_walker: 🧵
December 16, 2024 at 3:02 PM
Reposted by Sayash Kapoor
My wife noticed an article with my name on it in her feed and said “Did you write this? It doesn’t sound like you.” I was surprised to see an alarmist headline attributed to me and @sayash.bsky.social by Wired: “Human Misuse Will Make Artificial Intelligence More Dangerous”
December 15, 2024 at 2:23 PM
Reposted by Sayash Kapoor
Misinformation is not an AI problem. @sayash.bsky.social and @randomwalker.bsky.social highlight that "while generative AI reduces the cost of creating misinformation, it does not reduce the cost of distributing it".
Always worth reading them.

substack.com/app-link/pos...
We Looked at 78 Election Deepfakes. Political Misinformation is not an AI Problem.
Technology Isn’t the Problem—or the Solution.
substack.com
December 15, 2024 at 10:41 AM
Reposted by Sayash Kapoor
Is artificial intelligence supercharging the problem of political misinformation? In a word: no. New from @randomwalker.bsky.social & @sayash.bsky.social at @knightcolumbia.org. knightcolumbia.org/blog/we-look...
We Looked at 78 Election Deepfakes. Political Misinformation Is Not an AI Problem.
knightcolumbia.org
December 13, 2024 at 8:25 PM
Reposted by Sayash Kapoor
“Fixes to the information environment depend on structural and institutional changes rather than on curbing AI-generated content.” @sayash.bsky.social & @randomwalker.bsky.social find that AI hasn’t "fundamentally changed the landscape of political misinformation”: knightcolumbia.org/blog/we-look...
We Looked at 78 Election Deepfakes. Political Misinformation Is Not an AI Problem.
knightcolumbia.org
December 13, 2024 at 8:43 PM
Reposted by Sayash Kapoor
AI amplifying biorisk has been a major focus in AI policy & governance work. Is the spotlight merited?

Our recent cross-institutional work asks: Does the available evidence match the current level of attention?

📜 arxiv.org/abs/2412.01946
December 4, 2024 at 5:05 AM
Reposted by Sayash Kapoor
🚨 [AI BOOK CLUB] "AI Snake Oil: What AI Can Do, What It Can’t, and How to Tell the Difference" by @randomwalker.bsky.social & @sayash.bsky.social is a MUST-READ for everyone interested in AI, and it's our 🎉 15th selected book:

📖 About the book:
November 29, 2024 at 1:16 PM
Reposted by Sayash Kapoor
I have a new piece out with @aisvarya17.bsky.social in @columjournreview.bsky.social in which we test how OpenAI's new search feature surfaces and attributes news content. Our findings were not promising for news publishers (1/9) www.cjr.org/tow_center/h...
How ChatGPT (Mis)represents Publisher Content
ChatGPT search — which is positioned as a competitor to search engines like Google and Bing — launched with a press release from OpenAI touting claims that the company had “collaborated extensively wi...
www.cjr.org
November 27, 2024 at 7:31 PM
Reposted by Sayash Kapoor
New short paper on the limits of one type of inference scaling, by @benediktstroebl.bsky.social @sayash.bsky.social & me. The first page has the main findings and message. (The title is a play on Inference Scaling Laws.) More on the limits of inference scaling coming soon. arxiv.org/abs/2411.17501
November 27, 2024 at 12:27 PM
Reposted by Sayash Kapoor
Current read. Very informative so far. Not quite in the way most folks I engage with would expect. I knew predictive AI was bad but I did not know just how bad it is. Or how much government and corporations use it and screw up lives in the process.
November 23, 2024 at 6:41 AM
Reposted by Sayash Kapoor
Don't leave the future of AI to the tech companies. The authors of AI Snake Oil cut through the hype of AI bullshit—not to neglect the good stuff, but to make the reader aware of the (snake)pits of overselling the technology. @sayash.bsky.social www.linkedin.com/pulse/dont-l... #skolechat
Don't leave the future of AI to the tech companies
Book review: Arvind Narayanan & Sayash Kapoor, AI Snake Oil. What Artificial Intelligence Can Do, What It Can't and How to Tell the Difference.
www.linkedin.com
November 23, 2024 at 12:12 PM
Reposted by Sayash Kapoor
Another banger just picked up from my local library: @randomwalker.bsky.social and @sayash.bsky.social on "AI Snake Oil".
November 20, 2024 at 11:50 PM
Reposted by Sayash Kapoor
I have recently read the book "AI Snake Oil" by @randomwalker.bsky.social and @sayash.bsky.social. Before buying it, I expected it to be the best book I'd read on AI, and it most definitely is. I heartily endorse it to anyone wanting to find out what AI is all about and to see what is real and what is hype.
November 19, 2024 at 5:49 PM
Reposted by Sayash Kapoor
This book was surprisingly good! Excited to go to this.
Folks in San Francisco: my AI Snake Oil book talk is *today* at 5:30pm at Book Passage (SF ferry building).

Come through to discuss the future of AI, why AI isn't an existential risk, how we can build AI in/for the public, and what goes into writing a book.

Looking forward to seeing some of you!
November 18, 2024 at 4:51 PM
Folks in San Francisco: my AI Snake Oil book talk is *today* at 5:30pm at Book Passage (SF ferry building).

Come through to discuss the future of AI, why AI isn't an existential risk, how we can build AI in/for the public, and what goes into writing a book.

Looking forward to seeing some of you!
November 18, 2024 at 4:30 PM
Reposted by Sayash Kapoor
This easily approachable book by @randomwalker.bsky.social & @sayash.bsky.social should be compulsory reading for anyone implementing AI and automation. Especially everyone in the EU wishing to push full throttle on applying AI for everything and anything.
www.aisnakeoil.com
AI Snake Oil | Sayash Kapoor | Substack
What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. Click to read AI Snake Oil, a Substack publication with tens of thousands of subscribers.
www.aisnakeoil.com
November 18, 2024 at 11:09 AM