I Can’t Believe It’s Not Better Initiative
icbinb.bsky.social
The ICBINB initiative is a movement within the ML community for well-executed, meaningful research beyond bold numbers. Its goals are to crack open the research process and to re-value unexpected negative results.
icbinb.cc
Only 4 more days to go! The paper submission deadline is coming up soon.
Happy holidays. Our ICLR 2026 workshop, I Can’t Believe It’s Not Better, is now open for submissions. If you’ve been thinking about where LLMs still fall short, details are on the website.
January 26, 2026 at 8:01 PM
The I Can’t Believe It’s Not Better workshop has been accepted at ICLR 2026. This edition focuses on Where Large Language Models Need to Improve, highlighting limitations, negative results, and careful analyses that are often overlooked but critical for real progress. Call for papers coming soon.
Glad to share that our I Can’t Believe It’s Not Better workshop has been accepted at ICLR 2026. This year we focus on creating space to examine where large language models still fall short and why negative results and careful analysis matter. Call for papers coming soon.
ICLR Workshop 2026
Current LLMs still fall short in surprising ways. Let’s face those gaps, learn from failures, and move the field forward together!
December 15, 2025 at 8:37 PM
(6/n) Yutaro Yamada (Sakana AI) will present “The AI Scientist-v2” — an agentic system that wrote & submitted papers to our workshop. What does this mean for peer review, novelty, and research norms?
April 22, 2025 at 8:35 PM
(5/n) Roberta Raileanu (Meta) will talk about using LLMs to automate scientific discovery—how far along are we, really? She’ll introduce MLGym & show why real-world AI research is still far from solved.
April 22, 2025 at 8:35 PM
(4/n) John Kalantari (YRIKKA / Univ. of Minnesota) will tackle the myth of generalizability in DL—why context isn’t a footnote, but the key to real-world impact. Especially in high-stakes fields like healthcare.
April 22, 2025 at 8:35 PM
(3/n) Otilia Stretcu (Google Research) will unpack why classification still breaks in the real world—and how LLMs might help (or not). From safety to niche data, it’s messier than the benchmarks suggest.
April 22, 2025 at 8:35 PM
(2/n) Nick Haber (Stanford) will explore where reasoning-capable LLMs still fall short — and what that means for learning, education, and human-AI collaboration.
April 22, 2025 at 8:35 PM
🔥 Speaker spotlight for I Can’t Believe It’s Not Better #ICLR2025!
(1/n) Sunayana Sitaram (Microsoft Research) will dive into how LLMs fail in non-dominant languages—mistranslations, bias, and cultural blind spots. What does it mean for inclusion in AI?
April 22, 2025 at 8:35 PM
ICLR 2025 starts next week!
Our workshop “I Can’t Believe It’s Not Better” is on April 28 (Singapore time) — join us to talk real-world challenges in applying foundation models and deep learning across fields.
April 18, 2025 at 5:32 PM
Our #ICLR2025 @iclr-conf.bsky.social workshop is looking for submissions on unexpected outcomes and hard-earned lessons in real-world #DeepLearning
Submission Deadline: 03 February 2025
Workshop Dates: 27 or 28 April 2025
Location: Singapore
More info: shorturl.at/OpQns
ICLR Workshop 2025 - Call for Papers
We invite researchers and industry professionals to submit their papers on negative results, failed experiments, and unexpected challenges encountered in applying deep learning to real-world problems ...
shorturl.at
January 22, 2025 at 8:05 PM
Please spread the word! Our “I Can’t Believe It’s Not Better” #ICLR2025 workshop @iclr-conf.bsky.social is now accepting submissions on unexpected results in applied #DeepLearning. Share your tough lessons and unexpected outcomes by Feb 3, 2025. More info: shorturl.at/tD1ju
January 10, 2025 at 5:32 PM
Reposted by I Can’t Believe It’s Not Better Initiative
The call for papers is out for our “I Can’t Believe It’s Not Better” #ICLR2025 workshop @iclr-conf.bsky.social . Don’t miss this chance to highlight challenges in applying deep learning & foundation models in real life.
Attention ML researchers! We invite you to submit your work to the “I Can’t Believe It’s Not Better” workshop at #ICLR2025. Let’s shine a light on those unexpected outcomes, elusive improvements, and the tough lessons learned in applied #DeepLearning. Deadline is Feb 3, 2025. More details:
ICLR Workshop 2025 - Call for Papers
We invite researchers and industry professionals to submit their papers on negative results, failed experiments, and unexpected challenges encountered in applying deep learning to real-world problems ...
shorturl.at
December 17, 2024 at 9:13 PM
Please help spread the word. Our flyer:
December 17, 2024 at 9:11 PM
Attention ML researchers! We invite you to submit your work to the “I Can’t Believe It’s Not Better” workshop at #ICLR2025. Let’s shine a light on those unexpected outcomes, elusive improvements, and the tough lessons learned in applied #DeepLearning. Deadline is Feb 3, 2025. More details:
December 17, 2024 at 9:08 PM