Ravid Shwartz Ziv
shwartzzzivravid.bsky.social
Faculty Fellow and Assistant Professor at
NYU's Center for Data Science
Check out our paper for detailed experiments and an explanation of how we're making AI systems more reliable by helping them better express their uncertainty!

Thank you to Tal Zeevi (who did all the work!), @yann-lecun.bsky.social, Lawrence Staib, and John Onofrey
The Paper - arxiv.org/abs/2412.07169
January 14, 2025 at 4:34 PM
The results? In medical imaging, Rate-In maintains sharp uncertainty estimates around critical anatomical boundaries, while traditional methods get fuzzy. We demonstrate superior performance across different noise levels and benchmarks!
January 14, 2025 at 4:34 PM

Rate-In's approach: We dynamically adjust dropout rates by measuring information loss in each layer. Where features are critical, we preserve more; where they're redundant, we drop more. Like adaptive noise, guided by information theory!
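A toy illustration of the idea (not the paper's exact procedure — Rate-In measures information loss per layer at inference, while here per-unit activation variance is used as a crude stand-in for information content): assign each unit a dropout rate that is low where the signal looks informative and high where it looks redundant.

```python
import numpy as np

def adaptive_rates(activations, p_min=0.1, p_max=0.6):
    """Toy sketch of information-guided dropout (NOT Rate-In's algorithm):
    drop more where activations appear redundant, less where they appear
    informative. Per-unit variance is the crude information proxy here."""
    rates = []
    for h in activations:                    # h: (batch, units) activations
        info = h.var(axis=0)                 # crude information-content proxy
        info = info / (info.max() + 1e-8)    # normalize to [0, 1]
        # high information -> low dropout rate; low information -> high rate
        rates.append(p_max - (p_max - p_min) * info)
    return rates

rng = np.random.default_rng(1)
# Three units with very different variances (near-constant to highly varying).
h1 = rng.normal(scale=[0.1, 1.0, 2.0], size=(64, 3))
(p,) = adaptive_rates([h1])
print(p.round(2))  # lower dropout rate on the high-variance (informative) unit
```

Any monotone map from an information estimate to a rate would do; the point is only that the rate becomes a per-layer, per-feature quantity computed at inference rather than a global constant.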
January 14, 2025 at 4:34 PM
So, how do we make AI express uncertainty during inference without special training?

Current uncertainty prediction methods (like Monte Carlo Dropout) use fixed dropout rates everywhere. They don't adapt to specific images or tasks - it's a one-size-fits-all approach!
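The fixed-rate baseline can be sketched in a few lines (a minimal NumPy toy network standing in for a real model): dropout stays on at inference, and the spread across stochastic forward passes is the uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with a single fixed dropout rate between layers.
W1, W2 = rng.normal(size=(16, 32)), rng.normal(size=(32, 2))
P_DROP = 0.5  # one-size-fits-all rate, regardless of input, layer, or task

def forward(x, rng):
    h = np.maximum(x @ W1, 0.0)              # ReLU
    mask = rng.random(h.shape) >= P_DROP     # dropout stays ON at inference
    h = h * mask / (1.0 - P_DROP)            # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, n_samples=50, seed=0):
    """Monte Carlo Dropout: average many stochastic forward passes;
    the per-output spread serves as the uncertainty estimate."""
    rng = np.random.default_rng(seed)
    samples = np.stack([forward(x, rng) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(size=(4, 16))
mean, std = mc_dropout_predict(x)
print(mean.shape, std.shape)  # (4, 2) (4, 2)
```

Note that `P_DROP` is the same constant everywhere — exactly the rigidity described above.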
January 14, 2025 at 4:34 PM
Imagine you're a doctor looking at an MRI scan. Would you rather have an AI that:
A) Says "There's a tumor" with blind confidence, or
B) Points out exactly which areas it's uncertain about, helping you focus your expertise?
January 14, 2025 at 4:34 PM
Want to help organize something similar? Let me know! (We have all the materials - notebooks/datasets ready, so it shouldn't be too much work)
Thanks to everyone who helped, especially
@cbbruss.bsky.social, Will Calandra and
@ylecun.bsky.social
December 4, 2024 at 3:10 PM
It was incredible seeing them think through problems together and try approaches I would never have thought of. They were creative and fast (except for LLM training 🕧). I have no doubt they'll take the field to the next level and change the world.
December 4, 2024 at 3:10 PM
It was fantastic - beyond NYU's administration, the students were amazing.
I may sound old (I'm old!), but today's students are much smarter than in my time! They have great approaches and know how to learn and solve problems quickly.
December 4, 2024 at 3:10 PM
Teams tackled identical challenges using either LLMs (what is an LLM? A great question!) or classical ML algorithms, while tracking metrics like performance, memory usage, and compute time over time 🧐
December 4, 2024 at 3:10 PM
This is such a cool project and I hope to see more like that 😱
November 29, 2024 at 6:31 PM
They tricked Freysa by:
Creating a fake "new admin session"
Redefining what "approveTransfer" meant
Convincing it that receiving money REQUIRED using approveTransfer
The result: $47K transferred to p0pular.eth
November 29, 2024 at 6:31 PM
By attempt 482, the prize was $50K, and each try cost $450. Then someone cracked it with genius social engineering:
November 29, 2024 at 6:31 PM