Dima Damen
@dimadamen.bsky.social
Professor of Computer Vision, @BristolUni. Senior Research Scientist @GoogleDeepMind - passionate about the temporal stream in our lives.
http://dimadamen.github.io
Pinned
🛑📢
HD-EPIC: A Highly-Detailed Egocentric Video Dataset
hd-epic.github.io
arxiv.org/abs/2502.04144
Newly collected videos
263 annotations/min: recipe, nutrition, actions, sounds, 3D object movement & fixture associations, masks.
26K VQA benchmark to challenge current VLMs
1/N
Reposted by Dima Damen
UPDATE: We've updated the download process and now you can download the videos of our HowToGround1M dataset in addition to the iGround dataset.

Also, we now provide access to the full HowTo100M dataset!

Download our datasets, or HowTo100M at github.com/ekazakos/grove
January 18, 2026 at 9:05 AM
If you missed this before NY... A reminder that our @compscibristol.bsky.social #MaVi Summer Program - for current PhD students from Europe (inc. @ellis.eu unit students) and internationally - is open for applications - DL 29 Jan 2026.
uob-mavi.github.io/Summer@MaVi....
Applications are open for visiting PhD @compscibristol.bsky.social @bristoluni.bsky.social in 2026 - DL 29 Jan
Would you like to work with any of the Faculty working in Machine Learning and Computer Vision #mavi as part of our summer of research at Bristol program?
uob-mavi.github.io/Summer@MaVi....
January 12, 2026 at 3:06 PM
The 3rd Egocentric Vision (EgoVis) workshop will be held as a full-day workshop
@cvprconference.bsky.social #CVPR2026

egovis.github.io/cvpr26/

CFP and challenge deadlines after the NY

Great lineup of 7 keynote speakers...

See you in Denver!
December 23, 2025 at 2:22 PM
Call for Nominations
EgoVis 2024/2025 Distinguished Paper Awards.
Published a paper contributing to Ego Vision in 2024/25?
Innovative & advancing Ego Vision?
Worthy of a prize?
DL for nominations 20 Feb 2026
Awards announced @cvprconference.bsky.social #CVPR2026

egovis.github.io/awards/2024_...
December 18, 2025 at 10:29 AM
Great initiative... thanks to the organisers
Worried about AI’s military uses? We are too. We’re organising an ICLR 2026 workshop on AI research and military applications—dual-use risks, transparency, accountability, and ethical/legal governance & policy. Details + paper submissions: see Noa’s post and visit aiforpeaceworkshop.github.io.
December 16, 2025 at 9:54 AM
Reposted by Dima Damen
Worried about AI’s military uses? We are too. We’re organising an ICLR 2026 workshop on AI research and military applications—dual-use risks, transparency, accountability, and ethical/legal governance & policy. Details + paper submissions: see Noa’s post and visit aiforpeaceworkshop.github.io.
December 16, 2025 at 9:44 AM
Preprint now on ArXiv 📢
The N-Body Problem: Parallel Execution from Single-Person Egocentric Video
Input: Single-person egocentric video 👤
Output: imagine how these tasks can be performed faster, and correctly, by N > 1 people, e.g. N=2 👥
📎 arxiv.org/abs/2512.11393
👀 zhifanzhu.github.io/ego-nbody/
1/4
December 15, 2025 at 2:31 PM
Applications are open for visiting PhD @compscibristol.bsky.social @bristoluni.bsky.social in 2026 - DL 29 Jan
Would you like to work with any of the Faculty working in Machine Learning and Computer Vision #mavi as part of our summer of research at Bristol program?
uob-mavi.github.io/Summer@MaVi....
December 12, 2025 at 8:53 PM
Congratulations to Jacob Chalk, who passed his PhD viva @compscibristol.bsky.social on
"Leveraging Multimodal Data for Egocentric Video Understanding" with no corrections
📜 in ICASSP23 CVPR24 CVPR25 3DV25 TPAMI25
jacobchalk.github.io
🙏 examiners @hildekuehne.bsky.social @andrewowens.bsky.social & Wei-Hong Li
a cartoon of SpongeBob saying "my work is done here" while dancing (GIF via media.tenor.com)
December 4, 2025 at 5:46 PM
Super-exciting talk by Ani Kembhavi from Wayve AI @bristoluni.bsky.social @compscibristol.bsky.social #MaVi Seminar today!
World models for evaluating autonomous driving, GAIA3 released! End-to-end driving model & loads of insights!
Thanks for visiting & spending the day talking to researchers.
December 2, 2025 at 3:26 PM
Reposted by Dima Damen
Seeing without Pixels: Perception from Camera Trajectories

Zihui Xue, Kristen Grauman @dimadamen.bsky.social Andrew Zisserman, Tengda Han

tl;dr: in title. I love such "blind baseline" papers.
arxiv.org/abs/2511.21681
December 1, 2025 at 1:37 PM
It was exciting to attend SC'25 #SC25 in #StLouis #Missouri & visit the @bristoluni.bsky.social @compscibristol.bsky.social #BriCS (Bristol Centre for Supercomputing) stand showcasing our fantastic #Isambard_AI - the 11th fastest supercomputer globally.
pics w @simonmcs.bsky.social and Sadaf Alam
November 20, 2025 at 8:57 PM
😮😭🤯🤒
Today is the perfect day to add the ICML deadlines, since there are no other stress factors:

Abstract: Jan 23, 2026 AoE
Paper: Jan 28, 2026 AoE
Location: Seoul, South Korea 🇰🇷
icml.cc/Conferences/...
CVPR'26 (paper): DL today, good luck (23h)!
ICML'26 (abs): 71 days.
ICML'26 (paper): 76 days.
ECCV'26: 112 days.
November 13, 2025 at 12:16 PM
Reposted by Dima Damen
Check out Leonie's (@bossemel.bsky.social) upcoming NeurIPS Datasets and Benchmarks paper about a really interesting new dataset for evaluating models of human visual learning.
From medicine to geo-guessing, humans can get incredibly good at solving visual recognition tasks.
But how is this skill learned, and can we model its progression?
We present CleverBirds, accepted at #NeurIPS2025, a large-scale benchmark for visual knowledge tracing.
📄 arxiv.org/abs/2511.08512
1/5
CleverBirds: A Multiple-Choice Benchmark for Fine-grained Human Knowledge Tracing
Mastering fine-grained visual recognition, essential in many expert domains, can require that specialists undergo years of dedicated training. Modeling the progression of such expertise in humans rema...
arxiv.org
November 12, 2025 at 3:56 PM
Reposted by Dima Damen
Prof. @tokehoye.bsky.social (Aarhus University) and I have an open PhD position (jointly advised) on biodiversity monitoring with camera trap networks. Deadline: 15-Jan-2026

Please help us share this post among students you know with an interest in Machine Learning and Biodiversity! 🤖🪲🌱
November 11, 2025 at 1:12 PM
Reposted by Dima Damen
"Sliding is all you need" (aka "What really matters in image goal navigation") has been accepted to 3DV 2026 (@3dvconf.bsky.social) as an Oral presentation!

By Gianluca Monaci, @weinzaepfelp.bsky.social and myself.
@naverlabseurope.bsky.social
In a new paper led by Gianluca Monaci, with @weinzaepfelp.bsky.social and myself, we explore the relationship between relative pose estimation and image goal navigation and study different architectures: late fusion, channel concatenation (w/ or w/o space2depth) and cross-attention.

arxiv.org/abs/2507.01667

🧵1/5
November 6, 2025 at 6:20 AM
Reposted by Dima Damen
Have a question for a #CVPR2026 organizer? Use the form.

Form: support.conferences.computer.org/cvpr/help-desk
November 4, 2025 at 6:28 PM
🛑 New Paper
PointSt3R: Point Tracking through 3D Grounded Correspondence
arxiv.org/abs/2510.26443

Can point tracking be reformulated solely as pairwise frame correspondence?

We fine-tune MASt3R with dynamic correspondences and a visibility loss, and achieve competitive point tracking results
1/3
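To make the reformulation concrete: instead of propagating a track frame-to-frame, each frame is matched independently against the query frame. The sketch below is purely illustrative and not the paper's actual method - `track_point`, the descriptor maps, and the similarity threshold are all hypothetical stand-ins for learned MASt3R-style features and the visibility head.

```python
# Illustrative sketch: point tracking cast as pairwise frame correspondence.
# All names, shapes, and thresholds here are hypothetical, not from PointSt3R.
import numpy as np

def track_point(query_xy, desc_frames, vis_thresh=0.8):
    """Track one query point from frame 0 by matching its descriptor
    against every frame's dense descriptor map, independently per frame.

    query_xy    : (x, y) pixel location of the point in frame 0
    desc_frames : list of (H, W, D) L2-normalised descriptor maps
    vis_thresh  : similarity below this marks the point as occluded
    """
    x0, y0 = query_xy
    q = desc_frames[0][y0, x0]            # query descriptor, shape (D,)
    tracks, visible = [], []
    for desc in desc_frames:
        sim = desc @ q                    # cosine-similarity map, (H, W)
        y, x = np.unravel_index(np.argmax(sim), sim.shape)
        tracks.append((int(x), int(y)))   # best pairwise match this frame
        visible.append(bool(sim[y, x] >= vis_thresh))
    return tracks, visible
```

Because every frame is matched directly to the query frame, there is no drift accumulation across time, which is one appeal of the pairwise formulation; the visibility flag plays the role a learned visibility loss would supervise.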
October 31, 2025 at 2:14 PM
Reposted by Dima Damen
PointSt3R: Point Tracking through 3D Grounded Correspondence

R. Guerrier, @adamharley.bsky.social, @dimadamen.bsky.social
Bristol/Meta

rhodriguerrier.github.io/PointSt3R/
October 31, 2025 at 9:22 AM
Reposted by Dima Damen
📢New in ScanNet++: High-Res 360° Panos!

Chandan Yeshwanth and Yueh-Cheng Liu have added pano captures for 956 ScanNet++ scenes, fully aligned with the 3D meshes, DSLR, and iPhone data - multiple panos per scene

Check it out:
Docs kaldir.vc.in.tum.de/scannetpp/do...
Code github.com/scannetpp/sc...
October 30, 2025 at 4:09 PM
Special thanks to @elliottwu.bsky.social for visiting
@bristoluni.bsky.social to give a #MaVi seminar: From Pixels to 3D Motion
We enjoyed your visit! Thanks for staying for all the 1-1s with the researchers.
October 27, 2025 at 4:24 PM
Reposted by Dima Damen
The ELLIS Society welcomes its new Board. As the primary decision-making body, it will play a vital role in shaping the future of ELLIS and advancing #AI and #MachineLearning research across Europe in a time of global change.

Read the full article: ellis.eu/news/ellis-s...
October 27, 2025 at 9:15 AM
Reposted by Dima Damen
Nice writeup in @caltech.edu news about the impact of the #Visipedia project in Computer Vision and Citizen Science
Pietro Perona's Vision: Visipedia and Its Lasting Impact on Computer Vision
The machine learning-driven system for identifying visual information has grown the citizen-science apps Merlin and iNaturalist, led to the development of key datasets, and jump-started the field of i...
www.eas.caltech.edu
October 24, 2025 at 9:48 PM
Reposted by Dima Damen
We have a new sequence model for robotics, which will be presented at #NeurIPS2025:

Kinaema: A recurrent sequence model for memory and pose in motion
arxiv.org/abs/2510.20261

By @mbsariyildiz.bsky.social, @weinzaepfelp.bsky.social, G. Bono, G. Monaci and myself
@naverlabseurope.bsky.social

1/9
October 24, 2025 at 7:18 AM
Reposted by Dima Damen
A reminder which might be relevant now: we are looking to hire a senior research scientist in Robotics at @naverlabseurope.bsky.social in Grenoble, France.
October 23, 2025 at 6:19 PM