AI Companies Sold Us Their Vision of the Future at the Super Bowl. Here’s Why We Should Reject It
Are you feeling like your football-scouting operation has been taking a beating lately?

Do you sometimes wonder why your spreadsheets can’t get generated fast enough? Or perhaps your software coding is going slower than you always thought it would?

Most of all, does your kid struggle with not being able to imagine the decor of his bedroom in your new home?

If any of these problems resonate — and really, what could be more universal? — has Silicon Valley got an AI product for you.

You may have noticed Sunday night that these four instances were prime AI use cases per a series of Super Bowl ads from the industry’s biggest players (Microsoft Copilot, unicorn startup GenSpark, OpenAI’s Codex and Google Gemini, respectively), either solving challenges that don’t exist day-to-day for most Americans or, in the last case, solving a challenge that may actually be a good thing. Any parenting expert will tell you that temporary uncertainty or disappointment can healthily prepare a child for adulthood. But why risk that brief bout of questioning when AI can Magic Erase it from their lives?

Of course, we’re acting like the removal of a childhood-development moment is a byproduct of AI adoption and not the whole point. While these ads and the dozen or so more that aired during the game — from both established players like Meta and Anthropic and upstarts like Ramp AI and Artlist — have different visions for how machine thinking will help us, they are nearly all united by a common ideology. Namely: Everyday life is unruly, unknown, hard. Wouldn’t it be nice if a computer happened along to make it easy and guaranteed?

If you arrived unformed into the techno-capitalist parade that is the current iteration of the Super Bowl telecast, you would come to at least one very specific conclusion: technology will soon offload so much of our current toil. “It’ll be whatever we want it to be,” the Gemini mother says to her son about their house — AI is apparently manna now — as a message flashes onscreen: “A new kind of help from Google.” A more encapsulating set of credos I cannot imagine. Whatever we want! No limitations or consequences! And new help! Who doesn’t want that? Well, compared to the current kind of Googling — the kind that requires critical thinking — it certainly is new. Better? Less clear.

Tech revolutions at heart change the mechanisms by which humans live. The automobile lessened our reliance on the horse. This new revolution will lessen our need for a brain. Whether we want what this digital Che will wreak is another matter. Yes, on the surface, this ad spate is about AI products, which is about massive capitalizations, and Wall Street valuations, and many other -ations you hear on CNBC. But such talk of companies and products abstracts, purposefully, what’s really being sold.

The abstracting reached its pinnacle (nadir) with an insidious Alexa ad featuring Chris Hemsworth and his wife Elsa Pataky. He insisted the smart speaker could go sentient in various wildly extravagant ways and kill him — a classic straw man that paints anyone worried about AI safety as some kind of tinfoil alarmist while cleverly ignoring the actual dangers, like Alexa’s new policy of nonconsensual constant uploading. (See also Amazon’s Super Bowl Ring ad, which saves all the lost dogs while, oh yes, turning on some kind of Big Brother camera for mass surveillance.)

“I would never. I’m just here to help,” Alexa tells Hemsworth, which confoundingly seeks to have it both ways: “An AI can’t have murderous feelings; that’s silly. But it can have feelings of help and love!” (An Alexa ad from earlier this football season literally has Pete Davidson vulnerably telling a computer screen, “I like you, too.”)

To think about any of these tech company ads for more than five seconds is to realize how little they stand up to scrutiny. Which is exactly how the brands generally want it: feeling more, thinking less.

Of course, matters aren’t that simple; we’re just not that naive anymore. By now too many of us are wary of what’s being sold — sensitized by two years of deepfakes and soft slop, chastened by two decades of social media and rage-farming. And indeed, in between the shiny sales pitches came little glimpses of self-own. Anthropic went after OpenAI for how the latter’s ad-based chatbot could be compromised, without appearing to realize that confiding sensitive information to a chatbot can be dangerous even when it isn’t trying to sell you something. I’m not sure relying on an LLM to tell you how to navigate your relationship with your mother is so wise even if it refrains from pushing a cougar dating site.

And Artlist.io, a little-known video-generation platform, pitched its tools to the NY and LA markets with an ad that, the company told us, took less than a week to create as a result of those tools — or rather, a polar bear reading a voiceover script told us that while, on screen, dogs roasted marshmallows, horses ate from craft services and a person