Sayash Kapoor
@sayash.bsky.social
CS PhD candidate at Princeton. I study the societal impact of AI.
Website: cs.princeton.edu/~sayashk
Book/Substack: aisnakeoil.com
Reposted by Sayash Kapoor
(1/4) Ever wondered what tech policy might look like if it were informed by research on collective intelligence and complex systems? 🧠🧑‍💻

Join @jbakcoleman.bsky.social, @lukethorburn.com, and myself in San Diego on Aug 4th for the Collective Intelligence x Tech Policy workshop at @acmci.bsky.social!
May 19, 2025 at 11:01 AM
Reposted by Sayash Kapoor
New commentary in @nature.com from professor Arvind Narayanan (@randomwalker.bsky.social) & PhD candidate Sayash Kapoor (@sayash.bsky.social) about the risks of rapid adoption of AI in science - read: "Why an overreliance on AI-driven modelling is bad for science" 🔗

#CITP #AI #science #AcademiaSky
Why an overreliance on AI-driven modelling is bad for science
Without clear protocols to catch errors, artificial intelligence’s growing role in science could do more harm than good.
www.nature.com
April 9, 2025 at 6:19 PM
Reposted by Sayash Kapoor
In a new essay from our "Artificial Intelligence and Democratic Freedoms" series, @randomwalker.bsky.social & @sayash.bsky.social make the case for thinking of #AI as normal technology, instead of superintelligence. Read here: knightcolumbia.org/content/ai-a...
AI as Normal Technology
knightcolumbia.org
April 15, 2025 at 2:34 PM
Reposted by Sayash Kapoor
“The rush to adopt AI has consequences. As its use proliferates…some degree of caution and introspection is warranted.”

In a comment for @nature.com, @randomwalker.bsky.social and @sayash.bsky.social warn against an overreliance on AI-driven modeling in science: bit.ly/4icM0hp
Why an overreliance on AI-driven modelling is bad for science
Without clear protocols to catch errors, artificial intelligence’s growing role in science could do more harm than good.
bit.ly
April 16, 2025 at 3:42 PM
Reposted by Sayash Kapoor
Science is not a collection of findings. Progress happens through theories. As we move from findings to theories, things are less amenable to automation. The proliferation of AI-based scientific findings hasn't accelerated, and might even have inhibited, higher levels of progress. www.nature.com/articles/d41...
Why an overreliance on AI-driven modelling is bad for science
Without clear protocols to catch errors, artificial intelligence’s growing role in science could do more harm than good.
www.nature.com
April 9, 2025 at 3:45 PM
This is the specific use case I have in mind (Operator shouldn't be the *only* thing developers use, but rather that it can be a helpful addition to a suite of tools): x.com/random_walke...
February 3, 2025 at 6:12 PM
It is also better for end users. As @randomwalker.bsky.social and I have argued, focusing on products (rather than just models) means companies must understand user demand and build tools people want. It leads to more applications that people can productively use: www.aisnakeoil.com/p/ai-compani...
AI companies are pivoting from creating gods to building products. Good.
Turning models into products runs into five challenges
www.aisnakeoil.com
February 3, 2025 at 6:10 PM
Finally, the new product launches from OpenAI (Operator, Search, Computer use, Deep research) show that it doesn't just want to be in the business of creating more powerful AI — it also wants a piece of the product pie. This is a smart move as models become commoditized.
February 3, 2025 at 6:10 PM
This also highlights the need for agent interoperability: who would want to teach a new agent 100s of tasks from scratch? If web agents become widespread, preventing agent lock-in will be crucial.

(I'm working on fleshing out this argument with @sethlazar.org + Noam Kolt)
February 3, 2025 at 6:10 PM
Seen this way, Operator is a *tool* to easily create new web automation using natural language.

It could expand on the web automation businesses already use, making new automations easier to create.

So it is quite surprising that Operator isn't available on ChatGPT Team yet.
February 3, 2025 at 6:09 PM
Instead of thinking of Operator as a "universal assistant" that completes all tasks, it is better to think of it as a task template tool that automates specific tasks (for now).

Once a human has overseen a task a few times, we can estimate Operator's ability to automate it.
February 3, 2025 at 6:09 PM
OpenAI also allows you to "Save" tasks you completed using Operator. Once you have completed a task and given feedback until it succeeds, you don't need to repeat those steps the next time.

I can imagine this becoming powerful (though it's not very detailed right now).
February 3, 2025 at 6:09 PM
3) In many cases, the challenge isn't Operator's ability to complete a task, it is eliciting human preferences. Chatbots aren't a great form factor for that.

But there are many tasks where reliability isn't important. This is where today's agents shine. For example: x.com/random_walke...
February 3, 2025 at 6:08 PM
Could more training data lead to automation without human oversight? Not quite:

1) Prompt injection remains a pitfall for web agents. Anyone who sends you an email can control your agent.
2) Low reliability means agents fail on edge cases
February 3, 2025 at 6:08 PM
But being able to see agent actions and give feedback with a human in the loop converts Operator from an unreliable agent, like the Humane Pin or Rabbit R1, to a workable but imperfect product.

Operator is as much a UX advance as it is a tech advance.
February 3, 2025 at 6:08 PM
In the end, Operator struggled to file my expense reports even after an hour of trying and prompting. Then I took over, and my reports were filed 5 minutes later.

This is the bind for web agents today: not reliable enough to be automatable, not quick enough to save time.
February 3, 2025 at 6:08 PM
OpenAI also trained Operator to ask the user for feedback before taking consequential actions, though I am not sure how robust this is — a simple instruction to avoid asking the user changed its behavior, and I can easily imagine this being exploited by prompt injection attacks.
February 3, 2025 at 6:07 PM
But things went south quickly. It couldn't match the receipts to the amounts. Even after prompts directing it to missing receipts, it couldn't download them. It almost deleted previous receipts from other expenses!
February 3, 2025 at 6:07 PM
It navigated to the correct URLs, asked me to log into my OpenAI and Concur accounts. Once in my accounts, it downloaded receipts from the correct URL, and even started uploading the receipts under the right headings!
February 3, 2025 at 6:07 PM
I asked Operator to file reports for my OpenAI and Anthropic API expenses for the last month. This is a task I do manually each month, so I knew exactly what it would need to do. To my surprise, Operator got the first few steps exactly right:
February 3, 2025 at 6:06 PM
OpenAI's Operator is a web agent that can solve arbitrary tasks on the internet *with human supervision*. It runs on a virtual machine (*not* your computer). Users can see what the agent is doing on the browser in real-time. It is available to ChatGPT Pro subscribers.
February 3, 2025 at 6:05 PM
I spent a few hours with OpenAI's Operator automating expense reports. Most corporate jobs require filing expenses, so Operator could save *millions* of person-hours every year if it gets this right.

Some insights on what worked, what broke, and why this matters for the future of agents 🧵
February 3, 2025 at 6:04 PM
Reposted by Sayash Kapoor
Excellent post discussing whether "AI progress is slowing down".

www.aisnakeoil.com/p/is-ai-prog...

And if you're not subscribed to @randomwalker.bsky.social and @sayash.bsky.social 's great newsletter, what are you waiting for?
Is AI progress slowing down?
Making sense of recent technology trends and claims
www.aisnakeoil.com
December 19, 2024 at 11:57 PM
Reposted by Sayash Kapoor
Excited to share that AI Snake Oil is one of Nature's 10 best books of 2024! www.nature.com/articles/d41...
The whole first chapter is available online:
press.princeton.edu/books/hardco...
We hope you find it useful.
December 18, 2024 at 12:12 PM
Grateful to @katygb.bsky.social for feedback on the draft. Read the full essay (w/@randomwalker.bsky.social): www.aisnakeoil.com/p/we-looked-...
We Looked at 78 Election Deepfakes. Political Misinformation is not an AI Problem.
Technology Isn’t the Problem—or the Solution.
www.aisnakeoil.com
December 16, 2024 at 3:11 PM