Daniel Scalena
@danielsc4.it
PhDing @unimib 🇮🇹 & @gronlp.bsky.social 🇳🇱, interpretability and the like
danielsc4.it
You can easily save up to 65% of compute while improving performance on reasoning tasks 🤯 👀
Meet EAGer: We show that monitoring token-level uncertainty lets LLMs allocate compute dynamically - spending MORE on hard problems, LESS on easy ones.
🧵👇
October 16, 2025 at 12:07 PM
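For context, a minimal sketch of the general idea behind entropy-guided budgeting (the function names, thresholds, and budget numbers below are illustrative assumptions, not EAGer's actual implementation; it assumes a Hugging Face `model`/`tokenizer` pair):

```python
import torch
import torch.nn.functional as F

def token_entropy(logits):
    """Shannon entropy (in nats) of the next-token distribution."""
    logp = F.log_softmax(logits, dim=-1)
    return -(logp.exp() * logp).sum(dim=-1)

@torch.no_grad()
def adaptive_budget(model, tokenizer, prompt,
                    probe_tokens=64, entropy_threshold=1.5,
                    low_budget=1, high_budget=8):
    """Generate a short probe continuation, then decide how many full
    reasoning samples this prompt deserves based on mean token entropy."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs,
                         max_new_tokens=probe_tokens,
                         do_sample=True,
                         output_scores=True,
                         return_dict_in_generate=True)
    # out.scores holds one logits tensor per generated token.
    entropies = torch.stack([token_entropy(s[0]) for s in out.scores])
    mean_h = entropies.mean().item()
    # Uncertain prompt -> spend more compute; confident prompt -> spend less.
    return high_budget if mean_h > entropy_threshold else low_budget
```

The point is only to show the monitoring loop: probe a few tokens, measure how peaked the next-token distributions are, then decide how much compute to spend on that prompt.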
I’ll be attending the NEMI 2025 workshop this Friday and presenting a poster👇.
Happy to chat about cool interpretability stuff there!
This Friday NEMI 2025 is at Northeastern in Boston: 8 talks, 24 roundtables, 90 posters, 200+ attendees. Thanks to goodfire.ai/ for sponsoring! nemiconf.github.io/summer25/
If you can't make it in person, the livestream will be here:
www.youtube.com/live/4BJBis...
New England Mechanistic Interpretability Workshop
About: The New England Mechanistic Interpretability (NEMI) workshop aims to bring together academic and industry researchers from the New England and surround...
August 20, 2025 at 10:42 PM
📢 New paper: Applied interpretability 🤝 MT personalization!
We steer LLM generations to mimic human translator styles on literary novels in 7 languages. 📚
SAE steering can beat few-shot prompting, leading to better personalization while maintaining quality.
🧵1/
May 23, 2025 at 12:23 PM
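For context, a rough sketch of what SAE-based steering typically looks like (the layer index, steering scale, and decoder layout are illustrative assumptions, not the setup from the paper): a chosen SAE feature's decoder direction is added to the residual stream of one transformer layer during generation.

```python
import torch

def make_sae_steering_hook(sae_decoder, feature_idx, scale=4.0):
    """Forward hook that adds a scaled SAE feature direction
    (one decoder row) to a transformer block's output."""
    direction = sae_decoder[feature_idx]          # assumed shape: (d_model,)
    direction = direction / direction.norm()

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + scale * direction.to(hidden.dtype).to(hidden.device)
        if isinstance(output, tuple):
            return (hidden,) + output[1:]
        return hidden

    return hook

# Usage sketch (hypothetical layer/feature): steer one layer toward a
# "translator style" feature, generate, then remove the hook.
# handle = model.model.layers[20].register_forward_hook(
#     make_sae_steering_hook(sae.W_dec, feature_idx=1234, scale=6.0))
# output_ids = model.generate(**tokenizer(src_text, return_tensors="pt"))
# handle.remove()
```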
Now on 🦋!
Hello world 🐮! We are the Computational Linguistics group at the University of Groningen, follow us for updates about our research in natural language processing, machine learning, speech technology, digital humanities and more!
go.bsky.app/UDf92a2
November 21, 2024 at 2:51 PM
It was great! I'm already getting tickets for next year!
Inaugurating my bsky account by calling #EMNLP2024 a wrap! Had lots of fun presenting our work with @danielsc4.bsky.social and Jirui, and partied hard at the RiTA 🇮🇹 meetup (60+ people joined!). See you next year in Suzhou! 🇨🇳
November 17, 2024 at 8:32 PM