Thomas George
@tfjgeorge.bsky.social
Explainability of deep neural nets and causality https://tfjgeorge.github.io/
Reposted by Thomas George
Ilies Chibane, Thomas George, Pierre Nodet, Vincent Lemaire: Calibration improves detection of mislabeled examples https://arxiv.org/abs/2511.02738 https://arxiv.org/pdf/2511.02738 https://arxiv.org/html/2511.02738
November 5, 2025 at 6:34 AM
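The post above gives only the title and links, so here is a rough illustration of the idea as stated in the title (calibrate the model's probabilities, then rank examples by their calibrated loss to flag suspected mislabels). This is a minimal sketch using standard temperature scaling; the helper names and synthetic data are my assumptions, not the authors' code.

```python
# Sketch: calibrate first (temperature scaling), then score examples by
# calibrated cross-entropy; high loss = suspected mislabel. Illustrative only.
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Mean negative log-likelihood at temperature T."""
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(val_logits, val_labels):
    """Pick T minimizing NLL on held-out data (standard temperature scaling)."""
    res = minimize_scalar(lambda T: nll(val_logits, val_labels, T),
                          bounds=(0.05, 10.0), method="bounded")
    return res.x

def mislabel_scores(logits, labels, T):
    """Calibrated per-example loss; higher = more suspicious."""
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12)

# Toy usage with random logits standing in for a trained model's outputs.
rng = np.random.default_rng(0)
val_logits, val_labels = rng.normal(size=(200, 5)), rng.integers(0, 5, 200)
T = fit_temperature(val_logits, val_labels)
train_logits, train_labels = rng.normal(size=(1000, 5)), rng.integers(0, 5, 1000)
suspects = np.argsort(-mislabel_scores(train_logits, train_labels, T))[:20]
```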
Reposted by Thomas George
Aziz Bacha, Thomas George
Training Feature Attribution for Vision Models
https://arxiv.org/abs/2510.09135
October 13, 2025 at 7:09 AM
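The post above links only the abstract. For context (and not necessarily the paper's method), a classic baseline for attributing a prediction to training examples is the TracIn gradient dot-product (Pruthi et al., 2020), sketched here for a linear softmax model where per-example gradients have a closed form. Everything below is illustrative.

```python
# TracIn-style influence at a single checkpoint: score each training point
# by the dot product of its loss gradient with the test point's loss gradient.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def grad_wrt_weights(W, x, y):
    """Gradient of cross-entropy wrt W for a linear model with logits Wx."""
    p = softmax(W @ x)
    p[y] -= 1.0                 # dL/dlogits = p - onehot(y)
    return np.outer(p, x)       # dL/dW, shape (k, d)

def tracin_scores(W, X_train, y_train, x_test, y_test):
    """Influence of each training point on the test loss (grad · grad)."""
    g_test = grad_wrt_weights(W, x_test, y_test).ravel()
    return np.array([grad_wrt_weights(W, x, y).ravel() @ g_test
                     for x, y in zip(X_train, y_train)])

# Toy usage: random weights standing in for a trained checkpoint.
rng = np.random.default_rng(0)
d, k, n = 20, 3, 500
W = rng.normal(size=(k, d))
X, y = rng.normal(size=(n, d)), rng.integers(0, k, n)
scores = tracin_scores(W, X, y, X[0], y[0])
proponents = np.argsort(-scores)[:5]  # training points that most lower the test loss
```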
Reposted by Thomas George
📢 Talk Announcement

"Unlock the full predictive power of your multi-table data", by Luc-Aurélien Gauthier and Alexis Bondu

📜 Talk info: pretalx.com/pydata-paris-2025/talk/H9X8TG
📅 Schedule: pydata.org/paris2025/schedule
🎟 Tickets: pydata.org/paris2025/tickets
August 14, 2025 at 7:01 AM
PhD offer at Orange Innov in Paris: example-based explainability of deep networks' predictions.

Please share with interested candidates, or do not hesitate to reach out to me for further information 😁
PhD thesis: Explaining “black box” AI algorithms through their training examples
Global context: Recent advances in machine learning have led to new AI applications promising increased automation of new tasks to enhance operational efficiency or relie...
orange.jobs
March 14, 2025 at 9:12 AM
A unified view of mislabeling detection methods, built on a simple principle: your trained machine learning model knows more about your data than what you usually query it for (i.e., its predicted class). There are many other ways to *probe* it.

www.youtube.com/watch?v=fT9V...
December 17, 2024 at 9:34 AM
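To make the "probe" idea from the post above concrete: beyond the argmax class, a trained classifier's output distribution supports several per-example scores that correlate with mislabeling. A toy sketch follows; the probe names are my own shorthand, not terminology from the talk.

```python
# Several probes of a trained model's predicted probabilities, each usable
# as a mislabeling score where HIGHER means "more likely mislabeled".
import numpy as np

def probes(probs, labels):
    """probs: (n, k) predicted probabilities; labels: (n,) observed labels."""
    n = len(labels)
    p_label = probs[np.arange(n), labels]        # confidence in the given label
    p_max = probs.max(axis=1)                    # confidence in the top class
    margin = p_max - p_label                     # gap between top class and given label
    loss = -np.log(p_label + 1e-12)              # cross-entropy on the given label
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return {"low_confidence": 1 - p_label, "margin": margin,
            "loss": loss, "entropy": entropy}

# Toy usage: random probabilities standing in for a trained classifier.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 10, size=1000)
for name, score in probes(probs, labels).items():
    print(name, "-> top suspect:", int(np.argmax(score)))
```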