Robin Hesse
@robinhesse.bsky.social
PhD student in explainable AI for computer vision @visinf.bsky.social @tuda.bsky.social - Prev. intern AWS and @maxplanck.de
Reposted by Robin Hesse
🚨 Call for Questions! 🚨
We invite the community and stakeholders to submit questions, which will be discussed with our experts at the workshop! 🎤💡
👉 Submit your questions: forms.gle/8cYb4Ce3dGHi...
Workshop: excv-workshop.github.io
@iccv.bsky.social
#ICCV2025 #eXCV
September 8, 2025 at 3:54 PM
Reposted by Robin Hesse
Some impressions from our VISINF summer retreat at Lizumer Hütte in the Tirol Alps — including a hike up Geier Mountain and new research ideas at 2,857 m! 🇦🇹🏔️
August 29, 2025 at 12:48 PM
Reposted by Robin Hesse
Check out our blog post about SceneDINO 🦖
For more details, check out our project page, 🤗 demo, and the #ICCV2025 paper 🚀
🌍Project page: visinf.github.io/scenedino/
🤗Demo: visinf.github.io/scenedino/
📄Paper: arxiv.org/abs/2507.06230
@jev-aleks.bsky.social
July 24, 2025 at 1:16 PM
Reposted by Robin Hesse
🚨Deadline Extension Alert!
Our Non-proceedings track is open until August 15 for the eXCV workshop at ICCV.
Our nectar track accepts published papers as-is.
More info at: excv-workshop.github.io
@iccv.bsky.social #ICCV2025
July 18, 2025 at 9:31 AM
Reposted by Robin Hesse
Introducing the speakers for the eXCV workshop at ICCV, Hawaii. Get ready for many stimulating and insightful talks and discussions.
Our Non-proceedings track is still open!
Paper submission deadline: July 18, 2025
More info at: excv-workshop.github.io
@iccv.bsky.social #ICCV2025
July 10, 2025 at 12:49 PM
Reposted by Robin Hesse
🦖 We present “Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion”. #ICCV2025
🌍: visinf.github.io/scenedino/
📃: arxiv.org/abs/2507.06230
🤗: huggingface.co/spaces/jev-a...
@jev-aleks.bsky.social @fwimbauer.bsky.social @olvrhhn.bsky.social @stefanroth.bsky.social @dcremers.bsky.social
July 9, 2025 at 1:18 PM
Got a strong XAI paper rejected from ICCV? Submit it to our ICCV eXCV Workshop today—we welcome high-quality work!
🗓️ Submissions open until June 26 AoE.
📄 Got accepted to ICCV? Congrats! Consider our non-proceedings track.
#ICCV2025 @iccv.bsky.social
June 26, 2025 at 9:22 AM
Reposted by Robin Hesse
I will be presenting our paper on measuring non-linearity of deep neural networks @cvprconference.bsky.social!
🔗 Project page: qbouniot.github.io/affscore_web...
Come join me on Sunday 15th June from 10:30 to 12:30, ExHall D Poster #402 #CVPR2025
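As a rough illustration of what "measuring non-linearity" can mean (a hypothetical proxy, not the paper's actual score; see the project page for the real method), one can quantify how poorly the best least-squares affine map approximates a function's input/output behavior:

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_fit_residual(f, X):
    """Relative residual of the best affine fit to f on samples X.
    Near 0 for affine functions, larger for non-linear ones.
    (Illustrative proxy only, not the paper's affinity score.)"""
    Y = np.stack([f(x) for x in X])
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    A, *_ = np.linalg.lstsq(X1, Y, rcond=None)     # best affine map
    resid = Y - X1 @ A
    return np.linalg.norm(resid) / np.linalg.norm(Y)

X = rng.normal(size=(200, 5))
linear = lambda x: 2.0 * x - 1.0        # affine: score near 0
relu = lambda x: np.maximum(x, 0.0)     # non-linear: clearly positive score

assert affine_fit_residual(linear, X) < 1e-8
assert affine_fit_residual(relu, X) > 0.1
```

Applied layer by layer, such a score gives a per-layer picture of where a network's non-linearity is concentrated.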
June 15, 2025 at 5:14 AM
Submissions for the proceedings track (regular+position papers) of our second workshop on explainable computer vision at @iccv.bsky.social in Hawaii are open until June 20, 2025.
Join us in taking stock of the state of the field of explainability in computer vision, at our Workshop on Explainable Computer Vision: Quo Vadis? at #ICCV2025!
@iccv.bsky.social
June 15, 2025 at 5:46 AM
I'm looking forward to giving a talk at the MIV Workshop tomorrow at #CVPR2025!
We show how to improve the interpretability of a CNN by disentangling a polysemantic channel into multiple monosemantic ones - without changing the function of the CNN.
Disentangling Polysemantic Channels in Convolutional Neural Networks
by @robinhesse.bsky.social, Jonas Fischer, @simoneschaub.bsky.social, and @stefanroth.bsky.social
Paper: arxiv.org/abs/2504.12939
Talk: Thursday 11:40 AM, Grand ballroom C1
Poster: Thursday, 12:30 PM, ExHall D, Poster 31-60
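The function-preserving split described above can be sketched in a minimal linear setting (purely illustrative; the paper works on CNN channels and nonlinear activations, and the "concept" split here is hypothetical): splitting a channel's incoming weights into two parts and duplicating its outgoing weights leaves the network's output unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear network: x -> h = W x -> y = V h.
# Pretend hidden channel 0 is "polysemantic": its incoming
# weights W[0] mix two concepts.
x = rng.normal(size=4)
W = rng.normal(size=(3, 4))   # 3 hidden channels
V = rng.normal(size=(2, 3))   # 2 outputs
y_orig = V @ (W @ x)

# Split channel 0's incoming weights into two (hypothetical) concept parts.
w_a = W[0].copy(); w_a[2:] = 0.0   # "concept A" part
w_b = W[0] - w_a                   # "concept B" part

# Two new monosemantic channels replace the old one; each gets a
# copy of the outgoing weights V[:, 0], so contributions still sum.
W_new = np.vstack([w_a, w_b, W[1:]])
V_new = np.hstack([V[:, :1], V[:, :1], V[:, 1:]])
y_split = V_new @ (W_new @ x)

assert np.allclose(y_orig, y_split)  # function is unchanged
```

With nonlinear activations this identity no longer holds directly, which is exactly the harder case the paper addresses.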
June 11, 2025 at 9:54 PM
Reposted by Robin Hesse
We are thrilled to welcome an incredible lineup of invited speakers to the 4th Explainable AI for Computer Vision (XAI4CV) Workshop, held as part of #CVPR2025 — which kicks off next week, from Wednesday, June 11th to Sunday, June 15th in Nashville, TN!
June 5, 2025 at 12:59 PM
Reposted by Robin Hesse
📢 #CVPR2025 Highlight: Scene-Centric Unsupervised Panoptic Segmentation 🔥
We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery.
Using self-supervised features, depth & motion, we achieve SotA results!
🌎 visinf.github.io/cups
April 4, 2025 at 1:38 PM
Reposted by Robin Hesse
Want to learn about how model design choices affect the attribution quality of vision models? Visit our #NeurIPS2024 poster on Friday afternoon (East Exhibition Hall A-C #2910)!
Paper: arxiv.org/abs/2407.11910
Code: github.com/visinf/idsds
December 13, 2024 at 10:10 AM