Dr. Matias Valdenegro
mvaldenegro.bsky.social
Tenured Asst Prof at the Uni Groningen, Uncertainty in Machine Learning, Robotics, Chilean, Photographer, Feminist, Snoopy lover, Dr #latinXinAI
Hack the planet!
Today is the 30th anniversary of Hackers
September 16, 2025 at 11:06 AM
Reposted by Dr. Matias Valdenegro
No, you did not give those of us who happened to look like the people who bombed Pearl Harbor any due process. And that was profoundly wrong. It destroyed our lives.
August 8, 2025 at 6:46 PM
Reposted by Dr. Matias Valdenegro
We have to talk about rigor in AI work and what it should entail. The reality is that impoverished notions of rigor do not only lead to some one-off undesirable outcomes but can have a deeply formative impact on the scientific integrity and quality of both AI research and practice 1/
June 18, 2025 at 11:48 AM
Reposted by Dr. Matias Valdenegro
We despise immigrants for not putting down roots, even as we make sure that it is impossible for them to do so. We do this because we have no idea what we want.

open.substack.com/pub/iandunt/...
May 16, 2025 at 9:50 AM
Reposted by Dr. Matias Valdenegro
I'm embarrassed for the New York Times that they published this piece on Ms. Rachel, in which they cite a ridiculous anonymous right-wing website Stopantisemitism while indulging the mad, mad claim she may be funded by Hamas (!).

This isn't journalism:
Why Tot Celebrity Ms. Rachel Waded Into the Gaza Debate
www.nytimes.com
May 15, 2025 at 4:07 PM
Reposted by Dr. Matias Valdenegro
Just out! Our peer-reviewed critique of the Cass Review has been published by BMC Medical Research Methodology. Please read and share. We show that the Cass Review is fatally flawed and should not be the basis for policy or practice in transgender healthcare.

link.springer.com/article/10.1...
May 10, 2025 at 12:31 PM
Reposted by Dr. Matias Valdenegro
Aleatoric and epistemic uncertainty are clear-cut concepts, right? ... right? 😵‍💫 In our new ICLR blogpost we let different schools of thought speak and contradict each other, and revisit chatbots where “the character of aleatory ‘transforms’ into epistemic” iclr-blogposts.github.io/2025/blog/re...
May 8, 2025 at 8:18 AM
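The distinction the blogpost pokes at is usually operationalized, for ensembles, as an entropy decomposition: total predictive uncertainty splits into an aleatoric part (average member entropy) and an epistemic part (member disagreement, i.e. mutual information). A minimal stdlib sketch of that standard decomposition (function names are my own):

```python
import math

def entropy(p, eps=1e-12):
    """Shannon entropy (nats) of a discrete distribution given as a list."""
    return -sum(pi * math.log(pi + eps) for pi in p)

def decompose(member_probs):
    """Entropy-based uncertainty decomposition for an ensemble.

    member_probs: one softmax output (list of class probabilities) per
    ensemble member. Returns (total, aleatoric, epistemic):
      total     = H(mean prediction)
      aleatoric = mean per-member entropy
      epistemic = total - aleatoric  (the mutual information)
    """
    n_members = len(member_probs)
    n_classes = len(member_probs[0])
    mean_pred = [sum(m[c] for m in member_probs) / n_members
                 for c in range(n_classes)]
    total = entropy(mean_pred)
    aleatoric = sum(entropy(m) for m in member_probs) / n_members
    return total, aleatoric, total - aleatoric

# Members agree on a 50/50 prediction: all uncertainty counts as aleatoric.
t_agree, a_agree, e_agree = decompose([[0.5, 0.5], [0.5, 0.5]])
# Members disagree confidently: same total entropy, but now mostly epistemic.
t_dis, a_dis, e_dis = decompose([[0.99, 0.01], [0.01, 0.99]])
```

The two toy ensembles at the bottom illustrate the blogpost's point: identical mean predictions yield identical total uncertainty, yet the decomposition attributes it to opposite sources.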
Reposted by Dr. Matias Valdenegro
May 6, 2025 at 10:38 PM
Reposted by Dr. Matias Valdenegro
I wrote a post on how to connect with people (i.e., make friends) at CS conferences. These events can be intimidating, so here are some suggestions on how to navigate them

I'm late for #ICLR2025 #NAACL2025, but in time for #AISTATS2025 #ICML2025! 1/3
kamathematics.wordpress.com/2025/05/01/t...
Tips on How to Connect at Academic Conferences
I was a kinda awkward teenager. If you are a CS researcher reading this post, then chances are, you were too. How to navigate social situations and make friends is not always intuitive, and has to …
kamathematics.wordpress.com
May 1, 2025 at 12:57 PM
Reposted by Dr. Matias Valdenegro
When an AI model for code-editing company Cursor hallucinated a new rule, users revolted. www.wired.com/story/cursor...
An AI Customer Service Chatbot Made Up a Company Policy—and Created a Mess
www.wired.com
April 19, 2025 at 3:54 PM
Reposted by Dr. Matias Valdenegro
Even accepting the premise that AI produces useful writing (which no one should), using AI in education is like using a forklift at the gym. The weights do not actually need to be moved from place to place. That is not the work. The work is what happens within you.
April 15, 2025 at 2:56 AM
Reposted by Dr. Matias Valdenegro
I see it. I have lived it. 83 years ago, the U.S. government turned upon a group of its own citizens and residents and sent them to internment camps without due process. I was there among them. American fascism is back. It is here. It is now.
April 15, 2025 at 8:30 PM
Reposted by Dr. Matias Valdenegro
So I am leading this group building great teaching materials for scientific rigor (c4r.io). Their first unit is really coming together and I will teach it (Monday, April 21, 2025, 12:00-1:00 pm EST) to see how well it works. Join us: forms.monday.com/forms/7d978e...
Community for Rigor
Reliable research can be complicated to create. So we made a network of essential resources to help you better understand the principles and practices of scientific rigor.Why trust us? Because we’re a...
C4R.io
April 8, 2025 at 11:05 PM
Reposted by Dr. Matias Valdenegro
I've really enjoyed reading this "workography" by Kees van Deemter, whom I've never met but who has had a long career in NLP. Lots of storytelling and reflections on research, moving between institutions and countries, finding mentors, choosing between academia and industry, and more.
April 9, 2025 at 9:34 AM
Reposted by Dr. Matias Valdenegro
This study introduces a method for calibrating certainty expressions, transforming phrases like "Maybe" into probability distributions. This enhances decision-making for radiologists and fine-tunes AI models, improving uncertainty communication. https://arxiv.org/abs/2410.04315
Calibrating Expressions of Certainty
ArXiv link for Calibrating Expressions of Certainty
arxiv.org
April 3, 2025 at 8:20 PM
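One natural way to realize what the post describes, turning a phrase like "Maybe" into a probability distribution rather than a point value, is to associate each phrase with a Beta distribution over [0, 1]. A stdlib sketch under that assumption; the phrase-to-parameter table below is entirely made up for illustration (a real method would fit it from annotated data):

```python
# Illustrative only: a made-up mapping from certainty phrases to Beta(a, b)
# parameters; an actual calibration method would estimate these from data.
PHRASE_TO_BETA = {
    "almost certainly": (9.0, 1.0),
    "likely": (6.0, 3.0),
    "maybe": (4.0, 4.0),
    "unlikely": (2.0, 7.0),
}

def certainty_to_distribution(phrase):
    """Return (mean, variance) of the Beta distribution for a phrase.

    The mean gives a usable point probability; the variance expresses how
    imprecise the phrase itself is, which a single number would hide.
    """
    a, b = PHRASE_TO_BETA[phrase.lower()]
    mean = a / (a + b)
    variance = a * b / ((a + b) ** 2 * (a + b + 1.0))
    return mean, variance

m_maybe, v_maybe = certainty_to_distribution("maybe")
m_likely, _ = certainty_to_distribution("likely")
m_unlikely, _ = certainty_to_distribution("unlikely")
```

Representing a phrase as a distribution is what lets both uses in the post work: a decision-maker can threshold on the mean, while a fine-tuning objective can also penalize the spread.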
Reposted by Dr. Matias Valdenegro
How to Leverage Predictive Uncertainty Estimates for Reducing Catastrophic Forgetting in Online C...

Giuseppe Serra, Ben Werner, Florian Buettner

Action editor: Emmanuel Bengio

https://openreview.net/forum?id=dczXe0S1oL

#forgetting #memory #forget
April 2, 2025 at 12:07 AM
Reposted by Dr. Matias Valdenegro
March 31st is Trans Day of Visibility.
March 31, 2025 at 9:07 PM
Enjoying this game very much!
March 26, 2025 at 11:38 PM
Reposted by Dr. Matias Valdenegro
despite popular belief, LLMs are not fit for medical applications. SoTA models produce "non-trivial levels of hallucinations" even with inference techniques like CoT & search-augmented generation: arxiv.org/pdf/2503.05777

of surveyed clinicians, 53% use LLMs daily & 91% encountered hallucinations
March 22, 2025 at 7:08 PM
Reposted by Dr. Matias Valdenegro
While reading Ben Recht's article, I found Foster & Hart (2021) (arxiv.org/abs/2210.07169) quite interesting. The contribution is a proposal of an always-calibrated forecaster based on a continuously relaxed calibration measure. But I actually love their §1.1 motivating calibration.
March 22, 2025 at 12:03 AM
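For context on what Foster & Hart's continuous relaxation is relaxing: the classical calibration measure bins forecasts and compares each bin's average forecast with the empirical frequency of the event. A minimal stdlib sketch of that standard binned measure (function name and binning scheme are my own, not theirs):

```python
def binned_calibration_error(forecasts, outcomes, n_bins=10):
    """Binned (expected) calibration error with equal-width probability bins.

    forecasts: predicted probabilities in [0, 1]; outcomes: 0/1 labels.
    Per non-empty bin, compare the average forecast with the empirical
    frequency, then weight the gap by the bin's share of the data.
    """
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(forecasts, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    n = len(forecasts)
    error = 0.0
    for b in bins:
        if b:
            avg_forecast = sum(p for p, _ in b) / len(b)
            empirical = sum(y for _, y in b) / len(b)
            error += (len(b) / n) * abs(avg_forecast - empirical)
    return error

# A forecaster saying 0.5 when half the events occur is perfectly calibrated.
calibrated = binned_calibration_error([0.5] * 4, [1, 0, 1, 0])
# One saying 0.9 when nothing occurs is badly miscalibrated.
miscalibrated = binned_calibration_error([0.9] * 10, [0] * 10)
```

The hard bin boundaries here are exactly what makes the classical measure discontinuous in the forecasts; a continuous relaxation smooths this dependence so an always-calibrated forecaster becomes possible.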
Reposted by Dr. Matias Valdenegro
📣 New paper! The field of AI research is increasingly realising that benchmarks are very limited in what they can tell us about AI system performance and safety. We argue and lay out a roadmap toward a *science of AI evaluation*: arxiv.org/abs/2503.05336 🧵
March 20, 2025 at 1:28 PM
Reposted by Dr. Matias Valdenegro
91% of medical professionals using LLMs have encountered hallucinations and 84% believe they could impact patient health arxiv.org/abs/2503.05777
Medical Hallucinations in Foundation Models and Their Impact on Healthcare
Foundation Models that are capable of processing and generating multi-modal data have transformed AI's role in medicine. However, a key limitation of their reliability is hallucination, where inaccura...
arxiv.org
March 19, 2025 at 11:09 PM
Reposted by Dr. Matias Valdenegro
"Germany Tried to Silence Me, a UN Official, for Talking About Israel’s Genocidal War in Gaza"

In an exclusive piece for Zeteo, UN Special Rapporteur Francesca Albanese writes about her 5-day trip that exposed Germany's harsh deviation from democratic values:
Germany Tried to Silence Me, a UN Official, for Talking About Israel’s Genocidal War in Gaza
Francesca Albanese on her five-day trip that exposed Germany's harsh deviation from democratic values and shrinking landscape for freedom of expression.
zeteo.com
March 19, 2025 at 5:56 PM