Prof. Nava Tintarev
@navatintarev.bsky.social
(she/her) Full Professor of Explainable AI, University of Maastricht, NL. Director of the lab on Trustworthy AI in Media (TAIM). Director of Research at the Department of Advanced Computing Sciences. IPN board member (incoming 2026). navatintarev.com
Reposted by Prof. Nava Tintarev
We’re happy to share that our lab has received a Collaboration Award during the ICAI Day in recognition of our dedicated efforts to organize cross-lab events that foster collaboration and amplify our collective impact.
October 30, 2025 at 4:03 PM
Bulat Khaertdinov is presenting our recommender systems demo at the Dutch-Belgian Information Retrieval Workshop. (Photo credit: Alain Starke)
October 27, 2025 at 11:24 AM
Two submissions to be presented at MediaEval: Beyond Similarity: Two-Stage Retrieval for News Image Search (NewsImages track) and Early Fusion and Pre-text task learning for Video Memorability Prediction (Memorability track). Led by Bulat Khaertdinov and Aashutosh Ganesh, among others.
October 27, 2025 at 11:23 AM
Reposted by Prof. Nava Tintarev
🏹 Job alert: Tenure-Track Faculty in Artificial Intelligence and Machine Learning at @cispa.de

📍Saarbrücken & St. Ingbert 🇩🇪
📅 Apply by Nov 18th
🔗 https://career.cispa.de/jobs/tenure-track-faculty-in-artificial-intelligence-and-machine-learning-f-m-d-2025-2026-73
October 22, 2025 at 9:37 AM
What an honor to represent Maastricht University and to highlight the impact of research on society. We (with Cedric Waterschoot) enjoyed talking about how scientists could inform how we see information online.
October 17, 2025 at 6:48 AM
The report of Dagstuhl Seminar 25142 "Explainability in Focus: Advancing Evaluation through Reusable Experiment Design" is now published as part of the periodical Dagstuhl Reports: drops.dagstuhl.de/entities/doc...

Organized by: Simone Stumpf, Elizabeth Daly and Stefano Teso
October 13, 2025 at 8:26 AM
Looking forward to presenting this position paper at Frontiers in AI at ECAI!

Measuring Explanation Quality — a Path Forward

ecai2025.org/frontiers-in...
October 13, 2025 at 6:49 AM
On the 16th of October in Leiden, Cedric Waterschoot and I will attend the "Avond van wetenschap & maatschappij" (evening of science and society).
*** Why am I seeing this? ***
💡Provocative thesis: Scientists should play an important role in shaping how people see and interpret information online.
October 13, 2025 at 6:48 AM
“It actually doesn’t take much to be considered a difficult woman. That’s why there are so many of us.” ~ Jane Goodall
October 3, 2025 at 7:49 AM
“You cannot get through a single day without having an impact on the world around you. What you do makes a difference, and you have to decide what kind of difference you want to make.” ~ Jane Goodall
October 3, 2025 at 7:48 AM
Reposted by Prof. Nava Tintarev
A preliminary call for papers for #umap2026 is now available on the conference's website. Check it out, mark your calendars, and get to work on those papers. www.um.org/umap2026/cal...
@umapconf.bsky.social (#recsys2025)
September 24, 2025 at 7:44 AM
Reposted by Prof. Nava Tintarev
I'm hiring again! Please share. I'm recruiting a postdoc research fellow in human-centred AI for scalable decision support. Join us to investigate how to balance scalability and human control in medical decision support. Closing date: 4 October (AEST).
uqtmiller.github.io/recruitment/
September 16, 2025 at 4:34 AM
Reposted by Prof. Nava Tintarev
We have two papers accepted by our PhD students, one at SIGIR 2025: “RecGaze: The First Eye Tracking and User Interaction Dataset for Carousel Interfaces” by Jingwei Kang and one at NAACL 2025: “kNN For Whisper And Its Effect On Bias And Speaker Adaptation” by Maya Nachesa. Please check them out!
May 27, 2025 at 10:00 AM
Delayed summer announcement: my new website is up and should be more mobile-friendly than its predecessor.
September 10, 2025 at 1:33 PM
Our joint PhD student Adarsa Sivaprasad is presenting her work at an AI and Healthcare conference: Patient-Centred Explainability in IVF Outcome Prediction. She has been studying what kind of explanations users need from OPIS, which is a tool that predicts the likelihood of success in IVF.
September 10, 2025 at 1:31 PM
🚀 To summarize, the University of Maastricht and our Explainable Artificial Intelligence theme are heading to ACM RecSys 2025 with a line-up of contributions 🎉
✨ Here’s where you can find us:
September 10, 2025 at 1:28 PM
In my Frontiers in Artificial Intelligence talk (at ECAI'25) I will present a position piece on XAI evaluation. I will share insights from nearly 20 years of studying how people interact with explanation interfaces, drawing lessons from multiple research communities: NLP, IR, and ML.
September 10, 2025 at 1:27 PM
Reposted by Prof. Nava Tintarev
🕊️ Lifetime Achievement Award at #ACL2025NLP

A standing ovation for Prof. Kathy McKeown, recipient of the ACL 2025 Lifetime Achievement Award! 🌟
July 30, 2025 at 1:03 PM
Do you have an alternative to Facebook groups? I'd like to keep a community (monthly in-person events) running, but do not want to force members to stay on Facebook (or pay or sell data for ads).
July 23, 2025 at 7:25 AM
Reposted by Prof. Nava Tintarev
📣 Ever considered applying for an ERC Starting Grant? The 2026 Call for proposals is now open!

Application portal 👉 lnkd.in/dcsPAwqJ
Information for Applicants 👉 lnkd.in/dsE6B8eE

Deadline to apply for #ERCStG is 14 October 2025.
July 9, 2025 at 3:34 PM
What would you answer if this were about the review and authorship of grants? (Re: white-text prompts in the submission; LLMs as summarizers)
Serious questions: 1. If a paper is found to have such hidden instructions, what should the consequences be? Just rejected or should publishers take additional action? 2. If reviewers are caught using LLMs, what should the consequences be? I propose a ban on submitting their own work for X years.
"in 2025 we will have flying cars" 😂😂😂
July 6, 2025 at 9:33 AM
Starting point for the discussion on Thursday morning at the Dagstuhl workshop on human oversight. Now to be validated and broken with different case studies!
July 3, 2025 at 7:12 AM
It was also helpful to revisit lessons from cognitive engineering from Tim and Liz yesterday. This slide shows some key scientists for a fun quiz; the talk also featured an excellent reading list. Some of these concepts directly informed our Grey mirror exercise today.
July 1, 2025 at 2:58 PM
I love how I posted this talk on LinkedIn while Tim posted it here.
Really helped to give structure to the discussion.
At the Dagstuhl Seminar on challenges of human oversight of AI systems, Anna Lauber-Rönsberg from TU Dresden presents a legal perspective, with this lovely high-level view on features of human involvement.
July 1, 2025 at 2:54 PM