Belongie Lab
@belongielab.org
Computer Vision & Machine Learning
📍 Pioneer Centre for AI, University of Copenhagen
🌐 https://www.belongielab.org
Thank you for a great talk and interesting discussion Ching Lam!
August 13, 2025 at 10:53 AM
Peter Michael, Zekun Hao, Serge Belongie, and Abe Davis, “Noise-Coded Illumination for Forensic and Photometric Video Analysis,” ACM Transactions on Graphics, 2025.

NCI project page: peterfmichael.com/nci (2/2)
July 30, 2025 at 3:56 PM
Andrew and his coauthors won this test-of-time prize for their CVPR 2015 paper “Going Deeper with Convolutions” arxiv.org/abs/1409.4842 (2/2)
June 13, 2025 at 7:56 PM
We aspire to cultivate the next generation of AI researchers — those who will be the Keplers and Galileos of artificial intelligence. Researchers who will transform empirical mystery into theoretical clarity, and in doing so, redefine the foundations of intelligence itself. (7/7)
May 9, 2025 at 10:30 PM
Like Brahe, we are navigating a universe of patterns, charting phenomena whose significance may only become clear in hindsight. Our ambition is to go beyond observation. (6/7)
May 9, 2025 at 10:30 PM
We believe AI today is at a similar juncture. We are observing extraordinary capabilities in large-scale models — emergent behaviors, generalization across modalities, alignment challenges — but we lack an elegant theory to explain what we see. (5/7)
May 9, 2025 at 10:30 PM
Brahe didn’t formulate the laws of planetary motion, nor did he have the mathematical tools to do so. But through relentless observation and accurate measurement, he assembled a foundation of data that enabled the breakthroughs of Kepler and Galileo. (4/7)
May 9, 2025 at 10:30 PM
In front of our building stands a statue of Tycho Brahe — a tribute not just to Denmark’s scientific heritage, but to the spirit of inquiry that defines our field today. (3/7)
May 9, 2025 at 10:30 PM
You can read more about the old observatory (the current home of P1) in this Wikipedia article (2/7)
Østervold Observatory - Wikipedia
May 9, 2025 at 10:30 PM
Menglin is now a research scientist at Meta in NYC (5/5)
May 8, 2025 at 6:17 AM
Joint work with Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, Ser-Nam Lim. Link to code & paper: github.com/KMnP/vpt
May 8, 2025 at 6:17 AM
On 24 downstream tasks spanning different domains, VPT beat all other transfer learning baselines, surpassing even full fine-tuning in 20 cases, while storing significantly fewer parameters (less than 1% of the backbone) per task. (3/5)
May 8, 2025 at 6:17 AM
What is the best way to adapt large pre-trained vision models to downstream tasks, in terms of both effectiveness and efficiency? Drawing inspiration from recent advances in prompting in NLP, Menglin and colleagues proposed a simple and efficient method: Visual Prompt Tuning (VPT) (2/5)
May 8, 2025 at 6:17 AM
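The core idea behind VPT can be illustrated in a few lines: prepend a small set of trainable prompt tokens to the frozen backbone's input sequence, and train only those prompts (plus a task head) for each downstream task. Below is a minimal numpy sketch of that idea, using a single self-attention layer as a toy stand-in for a pre-trained ViT block; all dimensions, weights, and names here are illustrative assumptions, not the paper's actual implementation (see github.com/KMnP/vpt for that).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper).
d = 16          # embedding dimension
n_patches = 9   # image patch tokens
n_prompts = 4   # learnable prompt tokens: the only new per-task parameters

# Frozen "backbone" weights: one self-attention layer standing in for a
# pre-trained ViT block. These stay fixed during adaptation.
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def frozen_block(tokens):
    """One self-attention pass with frozen weights (residual connection)."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))
    return tokens + attn @ v

# VPT-Shallow: prepend trainable prompts to the input token sequence.
cls_token = rng.standard_normal((1, d))
patches = rng.standard_normal((n_patches, d))  # frozen patch embeddings
prompts = np.zeros((n_prompts, d))             # the trainable parameters

tokens = np.concatenate([cls_token, prompts, patches], axis=0)
out = frozen_block(tokens)

# In training, only `prompts` (and a task head, omitted here) would
# receive gradients; the backbone weights remain untouched, so each
# task stores just n_prompts * d extra parameters.
print(out.shape)  # (1 + n_prompts + n_patches, d) = (14, 16)
```

Swapping tasks then means swapping a tiny prompt matrix rather than a full copy of the backbone, which is where the sub-1% storage figure in the thread comes from.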