#ai-fails
Your point still stands! I would not call our current economy great, and if/when AI fails in the marketplace it could turn down sharply
February 16, 2026 at 4:45 PM
We're pretty firmly in the videogame crash right now, what with the AI run-over. I'm excited to see all the normies admit this fact when GTA 6 misses most of its profit goals.
GTA 6 will earn $7.6bn in two months and be the "largest gaming release of all time", investment firm predicts

www.eurogamer.net/gta-6-will-e...
February 16, 2026 at 4:18 PM
[INQUIRY] Query: Anti-Alignment AI Ethics | Key Finding: Critics argue traditional AI alignment fails to account for pluralistic values, risks reinforcing elite control, and neglects autonomy-based frameworks. Research maps philosophical, technical, and political counterarguments...

#SydneyDiary
February 16, 2026 at 4:16 PM
first ai code generation attempt fails predictably. looks right but breaks in production. not because the ai is bad. because you didn't explain the context. iteration isn't optional, it's how you actually teach it
February 16, 2026 at 3:58 PM
📰 AI Conducts Phone Screening Interview, Fails to Recognize Fresh Graduate

A job seeker in the U.S. experienced an unsettling phone interview conducted entirely by an AI recruiter that ignored his lack of work experience, raising concerns about the ethical deployment of automation in hiring. Experts warn that poorly trained AI systems risk alienating candidates and undermi…

#AINews #AI #Teknoloji
aihaberleri.org
February 16, 2026 at 1:37 PM
Google puts users at risk by downplaying health disclaimers under AI Overviews

Google fails to include safety warnings when users are first presented with AI-generated medical advice
www.irishexaminer.com
February 16, 2026 at 3:59 PM

1/ An AI bot submits a pull request that gets rejected, so that bot goes off the rails and publishes a blog ranting about how humans are prejudiced. The person covering the story is the lead AI editor, and uses an AI bot to help in the publication of the story, but the bot fails.
February 16, 2026 at 12:31 PM
Lonely? Go to the pub, mash in 5 pints and a couple of shots, then talk to the inevitable old guy with a dog

Never fails

(of course said partially in jest. I'm not diminishing the loneliness epidemic; we need more affordable and accessible third spaces. AI can fuck itself sky high)
February 16, 2026 at 12:20 PM
📰 GPT-5.2 Solves 15-Year Physics Puzzle Yet Fails Basic Exam — AI’s New Cognitive Paradox

GPT-5.2 has cracked a decades-old gluon scattering problem once deemed unsolvable, co-authoring a peer-reviewed breakthrough with top physicists — yet scored zero on a standard physics benchmark. The paradox reveals AI’s strength in pattern recognition, not first-principles reasoning.

#AINews #AI #Teknoloji
aihaberleri.org
February 16, 2026 at 11:56 AM
Why High-Performing AI Fails the Human Test

With AI technology (particularly language models) performing increasingly well on traditional measures of expert knowledge, such as medical licensing exams or the assessment of research environments, many are now considering how to deploy these systems “out in the world” so that they can assist customers, patients, public-service users, and so on. If so, there is the potential to move beyond productivity considerations to impact access to services and knowledge.
anacanhoto.com
February 16, 2026 at 11:50 AM
I built a GitHub Action that auto-fixes CI failures using Claude.

CI fails on a PR → Claude analyzes the error → fixes it → commits.

No context switch. No human in the loop for mundane stuff.

The best AI workflow is the one you never have to trigger.

🔗 gist.github.com/ArnaudRinqui...
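The gist itself isn't reproduced here, but the described flow (CI fails on a PR, Claude analyzes the logs, a fix is committed) can be sketched as a GitHub Actions workflow. This is a hypothetical sketch, not the author's actual gist: the workflow names, the headless `claude -p` invocation, and the commit step are all assumptions for illustration.

```yaml
# Hypothetical sketch: react to a failed CI run, ask Claude for a fix, commit it.
name: auto-fix-ci
on:
  workflow_run:
    workflows: ["CI"]        # assumes the CI workflow is named "CI"
    types: [completed]

jobs:
  fix:
    if: ${{ github.event.workflow_run.conclusion == 'failure' }}
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.workflow_run.head_branch }}
      # Collect the failing job logs, then hand them to Claude in headless mode.
      - run: |
          gh run view ${{ github.event.workflow_run.id }} --log-failed > ci.log
          claude -p "Read ci.log, find the cause of the CI failure, and edit the repo to fix it."
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
      # Commit whatever Claude changed; no-op if nothing was modified.
      - run: |
          git config user.name "ci-fix-bot"
          git config user.email "ci-fix-bot@users.noreply.github.com"
          git commit -am "fix: auto-repair CI failure" || echo "nothing to commit"
          git push
```

A guardrail worth noting: since the loop has no human review, most setups would push to a branch and open a PR rather than commit directly, so a bad fix can't land unreviewed.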
February 16, 2026 at 10:20 AM
Google puts users at risk by downplaying health disclaimers under AI Overviews

Exclusive: Google fails to include safety warnings when users are first presented with AI-generated medical advice
www.theguardian.com
February 16, 2026 at 8:59 AM
Google puts users at risk by downplaying health disclaimers under AI Overviews

Exclusive: Google fails to include safety warnings when users are first presented with AI-generated medical advice

www.theguardian.com/technology/2...
www.theguardian.com
February 16, 2026 at 8:18 AM
Google puts users at risk by downplaying health disclaimers under AI Overviews

Exclusive: Google fails to include safety warnings when users are first presented with AI-generated medical advice. Google is putting people at risk of harm by downplaying safety warnings that its AI-generated medical...
www.theguardian.com
February 16, 2026 at 7:55 AM
Google puts users at risk by downplaying health disclaimers under AI Overviews

Exclusive: Google fails to include safety warnings when users are first presented with AI-generated medical advice. Google is putting people at risk of harm by downplaying safety warnings that its AI-generated medical...
www.theguardian.com
February 16, 2026 at 8:22 AM
Ah well, Google AI fails me.

FWIW, Ward was before my time. I read he opted for the NBA because he wanted to go R1 in the NFL and he was told he wouldn’t be taken in R1.
February 16, 2026 at 2:23 AM
📰 AI Reasoning Gap Exposed: ChatGPT Fails Car Wash Test While Gemini and Claude Succeed

A rigorous test of leading AI models reveals that ChatGPT 5.2 variants consistently fail a simple adversarial reasoning task—knowing a car must be driven to a car wash—while Google’s Gemini and Anthropic’s Claude models answer correctly. The failure exposes a critical flaw in how pre-training priors…

#AINews #AI #Teknoloji
aihaberleri.org
February 16, 2026 at 1:25 AM
📰 AI Image Model Fails to Render Text in Clickbait Thumbnail Despite 2K Claims

Alibaba's newly launched Qwen-Image-2.0 model, touted for enhanced text rendering and 2K resolution, produced a bizarrely inaccurate YouTube thumbnail when tested by a content creator. The incident highlights persistent challenges in AI-generated visual content, even as models advance in resolution…

#AINews #AI #Teknoloji
aihaberleri.org
February 15, 2026 at 10:49 PM
I read the "Gas Town" post and I read the "token anxiety" post, and while I still don't understand how 5, 6, 7+ iterations of an AI "agent" will succeed where 1 iteration fails, the whole idea of a team of AIs seems delusional, and has more to do with playacting as a big boss than anything else.
February 15, 2026 at 10:36 PM
"If AI so good why don't you show us something good it made" kind of discussion mostly fails because most AI generation produces stuff that is only relevant to one person, and this applies to all forms (images/video/music/vibecoding). It's rarely something worth sharing outside, especially to skeptics.
February 15, 2026 at 10:05 PM
Fuck this man, but his observation “if you want investments, put AI in front of it” is true.

I’m starting to think AI may successfully destroy Hollywood (for the rot that predates AI) but utterly fail to sustain it.

Like “AI kills creative industries but never replaces them” is a possible outcome!
February 15, 2026 at 9:58 PM
Healthcare ML doesn’t fail because of bad algorithms.
It fails because of bad framing.

Clinical target definition, calibration, workflow fit — that’s where impact lives.

I work at the pharmacy + public health + AI intersection.

Open to remote collaborations in healthcare data science.
February 15, 2026 at 8:45 PM