Simone Schaub-Meyer
@simoneschaub.bsky.social
Assistant Professor of Computer Science at TU Darmstadt, Member of @ellis.eu, DFG #EmmyNoether Fellow, PhD @ETH
Computer Vision & Deep Learning
Reposted by Simone Schaub-Meyer
📢🎓 We have open PhD positions in Computer Vision & Machine Learning at @tuda.bsky.social and @hessianai.bsky.social within the Reasonable AI Cluster of Excellence — supervised by @stefanroth.bsky.social, @simoneschaub.bsky.social and many others!
www.career.tu-darmstadt.de/tu-darmstadt...
November 4, 2025 at 2:04 PM
🎉 Today, Simon Kiefhaber will present our ICCV oral paper on how to make optical flow estimators more efficient (faster inference and lower memory usage) while maintaining state-of-the-art accuracy:
🌍 visinf.github.io/recover
Talk: Tue 09:30 AM, Kalakaua Ballroom
Poster: Tue 11:45 AM, Exhibit Hall I #76
October 21, 2025 at 7:13 PM
Reposted by Simone Schaub-Meyer
📢 Excited to share our IROS 2025 paper “Boosting Omnidirectional Stereo Matching with a Pre-trained Depth Foundation Model”!
Work by Jannik Endres, @olvrhhn.bsky.social, Charles Corbière, @simoneschaub.bsky.social, @stefanroth.bsky.social and Alexandre Alahi.
October 17, 2025 at 9:27 PM
Reposted by Simone Schaub-Meyer
[1/8] We are presenting four main conference papers, two workshop papers, and a workshop at @iccv.bsky.social 2025 in Hawaii! 🎉🏝
October 19, 2025 at 3:35 PM
Reposted by Simone Schaub-Meyer
🎓 Looking for a PhD position in computer vision? Apply to the European Laboratory for Learning & Intelligent Systems (ELLIS) and work with @stefanroth.bsky.social & @simoneschaub.bsky.social! Join the info session on Oct 1.
@ellis.eu @tuda.bsky.social
ellis.eu/news/ellis-p...
ELLIS PhD Program: Call for Applications 2025
September 29, 2025 at 9:35 AM
Reposted by Simone Schaub-Meyer
We are presenting five papers at the DAGM German Conference on Pattern Recognition (GCPR, @gcpr-by-dagm.bsky.social) in Freiburg this week!
September 23, 2025 at 5:46 PM
Reposted by Simone Schaub-Meyer
Efficient Masked Attention Transformer for Few-Shot Classification and Segmentation (GCPR 2025)
by @dustin-carrion.bsky.social, @stefanroth.bsky.social, and @simoneschaub.bsky.social
🌍: visinf.github.io/emat
Poster: Wednesday, 03:30 PM, Poster 8
September 23, 2025 at 5:46 PM
Reposted by Simone Schaub-Meyer
Removing Cost Volumes from Optical Flow Estimators (ICCV 2025 Oral)
by @skiefhaber.de, @stefanroth.bsky.social, and @simoneschaub.bsky.social
🌍: visinf.github.io/recover
Poster: Friday, 10:30 AM, Poster 14
September 23, 2025 at 5:46 PM
Reposted by Simone Schaub-Meyer
🚀 Open-Mic Opinions! 🚀
We welcome you to voice your opinion on the state of XAI. You get 5 minutes to speak (in-person only) during the workshop.
📷 Submit your proposals here: lnkd.in/d7_EWKXp
For more details: lnkd.in/dpYWVYXS
@iccv.bsky.social #ICCV2025 #eXCV
September 16, 2025 at 1:48 PM
Reposted by Simone Schaub-Meyer
Some impressions from our VISINF summer retreat at Lizumer Hütte in the Tirol Alps — including a hike up Geier Mountain and new research ideas at 2,857 m! 🇦🇹🏔️
August 29, 2025 at 12:48 PM
Reposted by Simone Schaub-Meyer
🚨 Deadline Approaching! 🚨
Non-Proceedings track closes in 2 days!
Be sure to submit on time!
We are awaiting your submissions!
More info at: excv-workshop.github.io
@iccv.bsky.social #ICCV2025 #eXCV
August 13, 2025 at 8:33 AM
Reposted by Simone Schaub-Meyer
Got a strong XAI paper rejected from ICCV? Submit it to our ICCV eXCV Workshop today—we welcome high-quality work!
🗓️ Submissions open until June 26 AoE.
📄 Got accepted to ICCV? Congrats! Consider our non-proceedings track.
#ICCV2025 @iccv.bsky.social
June 26, 2025 at 9:22 AM
Reposted by Simone Schaub-Meyer
Join us in taking stock of the state of explainability in computer vision at our Workshop on Explainable Computer Vision: Quo Vadis? at #ICCV2025!
@iccv.bsky.social
June 14, 2025 at 3:48 PM
Reposted by Simone Schaub-Meyer
Reasonable Artificial Intelligence and The Adaptive Mind: TU Darmstadt has been awarded funding for two cluster projects under the Excellence Strategy of the German federal and state governments. A milestone for our university! www.tu-darmstadt.de/universitaet...
May 22, 2025 at 4:20 PM
Reposted by Simone Schaub-Meyer
"Reasonable AI" got selected as a cluster of excellence www.tu-darmstadt.de/universitaet...
Overwhelmingly happy to be part of RAI & continue working with the smart minds at TU Darmstadt & hessian.AI, while also seeing my new home at Uni Bremen achieve a historic success in the excellence strategy!
May 23, 2025 at 11:57 AM
"Reasonable AI" got selected as a cluster of excellence www.tu-darmstadt.de/universitaet...
Overwhelmingly happy to be part of RAI & continue working with the smart minds at TU Darmstadt & hessian.AI, while also seeing my new home at Uni Bremen achieve a historic success in the excellence strategy!
Overwhelmingly happy to be part of RAI & continue working with the smart minds at TU Darmstadt & hessian.AI, while also seeing my new home at Uni Bremen achieve a historic success in the excellence strategy!
Reposted by Simone Schaub-Meyer
📢 #CVPR2025 Highlight: Scene-Centric Unsupervised Panoptic Segmentation 🔥
We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery.
Using self-supervised features, depth & motion, we achieve SotA results!
🌎 visinf.github.io/cups
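The post names the ingredients but not the recipe, so the following is a deliberately simplified sketch of how such cues could be turned into panoptic pseudo-labels. It is an assumption for illustration only, not the CUPS pipeline (see the project page above); the function pseudo_panoptic, the cluster count, and the depth-edge threshold are invented, and motion boundaries could be folded into the boundary map the same way as depth edges.

```python
# Illustrative sketch only; NOT the CUPS pipeline (see visinf.github.io/cups).
# Cluster self-supervised patch features into semantic pseudo-labels, then
# split "thing" regions into instances along depth discontinuities.
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def pseudo_panoptic(feats, depth, thing_classes, n_classes=27, edge_thr=0.1):
    """feats: (H, W, C) self-supervised features; depth: (H, W) relative depth."""
    H, W, C = feats.shape
    # 1) Semantic pseudo-labels via k-means over per-pixel features.
    sem = KMeans(n_clusters=n_classes, n_init=4).fit_predict(
        feats.reshape(-1, C)).reshape(H, W)
    # 2) Depth edges as a cheap proxy for instance boundaries.
    gy, gx = np.gradient(depth)
    boundary = np.hypot(gx, gy) > edge_thr
    # 3) Connected components of each "thing" class, cut at the boundaries.
    instances = np.zeros((H, W), dtype=np.int32)
    next_id = 1
    for c in thing_classes:
        labeled, n = ndimage.label((sem == c) & ~boundary)
        instances[labeled > 0] = labeled[labeled > 0] + next_id - 1
        next_id += n
    return sem, instances
```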
April 4, 2025 at 1:38 PM
Reposted by Simone Schaub-Meyer
Why has continual ML not had its breakthrough yet?
In our new collaborative paper w/ many amazing authors, we argue that “Continual Learning Should Move Beyond Incremental Classification”!
We highlight 5 examples to show where CL algos can fail & pinpoint 3 key challenges
arxiv.org/abs/2502.11927
February 18, 2025 at 1:33 PM
Reposted by Simone Schaub-Meyer
🏔️⛷️ Looking back on a fantastic week full of talks, research discussions, and skiing in the Austrian mountains!
January 31, 2025 at 7:38 PM
Reposted by Simone Schaub-Meyer
Excited to share that our paper recommender platform www.scholar-inbox.com has reached 20k users today! We hope to reach 100k by the end of the year. Lots of new features are currently in the works and will be rolled out soon.
January 15, 2025 at 10:03 PM
Reposted by Simone Schaub-Meyer
Understanding what AI models can do, and what they cannot: an interview with @simoneschaub.bsky.social, early-career researcher in the cluster project “RAI” (Reasonable Artificial Intelligence).
“RAI” is one of the projects with which TU Darmstadt (TUDa) is applying for a Cluster of Excellence.
www.youtube.com/watch?v=2VAm...
"RAI" ist eines der Projekte, mit denen sich die TUDa um einen Exzellenzcluster bewirbt.
www.youtube.com/watch?v=2VAm...
Verstehen, was KI-Modelle können und was nicht: RAI-Forschende Dr. Simone Schaub-Meyer im Interview
YouTube video by Technische Universität Darmstadt
www.youtube.com
January 13, 2025 at 12:18 PM
Reposted by Simone Schaub-Meyer
Want to learn about how model design choices affect the attribution quality of vision models? Visit our #NeurIPS2024 poster on Friday afternoon (East Exhibition Hall A-C #2910)!
Paper: arxiv.org/abs/2407.11910
Code: github.com/visinf/idsds
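For readers unfamiliar with attribution maps, here is a minimal sketch of one common attribution (gradient times input) and a crude deletion-style faithfulness probe. It is only a generic illustration, not the evaluation protocol of the paper or the code in the linked repository; the helper names and the 10% deletion fraction are assumptions.

```python
# Generic illustration of "attribution" and a crude faithfulness probe.
# NOT the paper's protocol (see github.com/visinf/idsds for the actual code).
import torch

def grad_x_input(model, x, target):
    """x: (1, 3, H, W) image; returns an (H, W) gradient-x-input attribution map."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return (x.grad * x).sum(dim=1).squeeze(0).detach()

def deletion_drop(model, x, target, attr, frac=0.1):
    """Confidence drop after zeroing the `frac` most-attributed pixels."""
    k = int(frac * attr.numel())
    keep = torch.ones_like(attr).flatten()
    keep[attr.flatten().topk(k).indices] = 0.0
    keep = keep.view(1, 1, *attr.shape)          # broadcast over channels
    with torch.no_grad():
        p_full = model(x).softmax(-1)[0, target]
        p_del = model(x * keep).softmax(-1)[0, target]
    return (p_full - p_del).item()               # larger drop = more faithful map
```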
December 13, 2024 at 10:10 AM
Reposted by Simone Schaub-Meyer
Our work, "Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals" is accepted at TMLR! 🎉
visinf.github.io/primaps/
PriMaPs generates masks from self-supervised features, which can be used to boost unsupervised semantic segmentation via stochastic EM.
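As a rough, hedged illustration of that sentence (the real method is on the linked project page; everything below, including the threshold and mask count, is an assumption for exposition), one could repeatedly threshold cosine similarity to a principal feature direction to peel off one mask at a time, leaving the EM-based boosting of the segmenter aside:

```python
# Rough sketch of the idea above; an assumption, not the actual PriMaPs
# algorithm (see visinf.github.io/primaps/). Masks are peeled off one at a
# time by thresholding similarity to the dominant feature direction of the
# not-yet-assigned pixels.
import numpy as np

def principal_mask_proposals(feats, n_masks=5, thr=0.7):
    """feats: (H, W, C) self-supervised features; returns a list of (H, W) masks."""
    H, W, C = feats.shape
    f = feats.reshape(-1, C)
    f = f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-8)
    unassigned = np.ones(H * W, dtype=bool)
    masks = []
    for _ in range(n_masks):
        if not unassigned.any():
            break
        # Dominant direction of the remaining features (top right-singular vector).
        _, _, vt = np.linalg.svd(f[unassigned], full_matrices=False)
        sim = np.abs(f @ vt[0])                  # sign-agnostic cosine similarity
        mask = (sim > thr) & unassigned
        if not mask.any():
            break
        masks.append(mask.reshape(H, W))
        unassigned &= ~mask
    return masks
```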
November 28, 2024 at 5:41 PM
Reposted by Simone Schaub-Meyer
The DFG has admitted Dr. Simone Schaub-Meyer to its Emmy Noether Programme. Schaub-Meyer aims to develop methods that improve the understanding of widely used #AI models for image and video analysis. www.tu-darmstadt.de/universitaet...
March 4, 2024 at 1:18 PM