Serge Belongie
@serge.belongie.com
serge.belongie.com

Professor, University of Copenhagen 🇩🇰 PI @belongielab.org 🕵️‍♂️ Director @aicentre.dk 🤖 President @ellis.eu 🇪🇺 Formerly: Cornell, Google, UCSD

#ComputerVision #MachineLearning

Serge Belongie is a professor of Computer Science at the University of Copenhagen, where he also serves as the head of the Danish Pioneer Centre for Artificial Intelligence. Previously, he was the Andrew H. and Ann R. Tisch Professor of Computer Science at Cornell Tech, where he also served as Associate Dean. He has also been a member of the Visiting Faculty program at Google. He is known for his contributions to the fields of computer vision and machine learning, specifically object recognition and image segmentation, with his scientific research in these areas cited over 150,000 times according to Google Scholar. Along with Jitendra Malik, Belongie proposed the concept of shape context, a widely used feature descriptor in object recognition. He has co-founded several startups in the areas of computer vision and object recognition.

Circa 2026, I estimate the above SC-TPS-KNN experiments would take around 5 min. to run on a single high-end workstation, or around 5 sec. on a modern cloud cluster. (3/3)

For a brief shining moment, I beat the state-of-the-art LeNet approach, right as I was interviewing for faculty jobs. Not long after that, Schölkopf et al. beat me with an SVM-based method. Kernels and boosting were huge for ~a decade after that, but after 2012... well, everything changed! (2/3)

The main experiments from my PhD thesis took around 2 weeks to run on Berkeley's Millennium cluster (30 Sun Ultra machines) circa 1999. With that compute muscle, Jitendra, Jan, and I were able to run Shape Context matching with Thin-Plate Spline alignment followed by K-NN on the MNIST dataset. (1/3)
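
For readers curious what SC-TPS-KNN looks like in code, here's a rough modern sketch of the first and last stages in Python/NumPy. This is my own toy reconstruction, not the thesis code; the iterated TPS warping step is omitted for brevity, and names like `shape_context` and `match_cost` are mine.

```python
# Toy sketch of shape context matching (not the original thesis code).
import numpy as np
from scipy.optimize import linear_sum_assignment

def shape_context(points, n_r=5, n_theta=12):
    """Log-polar histogram of relative point positions, one per point."""
    n = len(points)
    diff = points[None, :, :] - points[:, None, :]   # diff[i, j] = p_j - p_i
    dist = np.linalg.norm(diff, axis=-1)
    ang = np.arctan2(diff[..., 1], diff[..., 0])     # angles in [-pi, pi]
    mean_d = dist[dist > 0].mean()                   # scale normalization
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_d
    descs = np.zeros((n, n_r * n_theta))
    for i in range(n):
        mask = np.arange(n) != i
        r_bin = np.searchsorted(r_edges, dist[i, mask]) - 1
        t_bin = ((ang[i, mask] + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
        valid = (r_bin >= 0) & (r_bin < n_r)         # drop out-of-range radii
        hist = np.zeros((n_r, n_theta))
        np.add.at(hist, (r_bin[valid], t_bin[valid]), 1)
        descs[i] = hist.ravel() / max(hist.sum(), 1)
    return descs

def match_cost(pts_a, pts_b):
    """Chi-squared cost between shape contexts, solved as an assignment."""
    da, db = shape_context(pts_a), shape_context(pts_b)
    num = (da[:, None, :] - db[None, :, :]) ** 2
    den = da[:, None, :] + db[None, :, :] + 1e-9
    cost = 0.5 * (num / den).sum(-1)                 # pairwise chi-squared
    row, col = linear_sum_assignment(cost)           # Hungarian matching
    return cost[row, col].mean()
```

In the full pipeline, the correspondence and thin-plate spline warping steps alternate for a few iterations, and the final matching cost drives a K-NN vote over stored exemplar digits.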

The “memory wall,” the long-standing and still-widening gap between processor speed and memory bandwidth, is one of the main drivers of Sebastian’s PhD studies on structured learning under memory constraints sebulo.github.io (2/2)
Sebastian Loeschcke
sebulo.github.io
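
To make the memory wall concrete, here's a toy NumPy timing sketch (my illustration, not from the talk): an elementwise add streams three arrays through memory while doing one FLOP per element, whereas a matrix multiply reuses its operands many times, so the two sustain very different FLOP rates on the same machine.

```python
# Toy memory-wall demo: memory-bound add vs. compute-bound matmul.
import time
import numpy as np

n = 4096
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a + b                       # ~n^2 FLOPs, ~3 * n^2 * 8 bytes moved
t_add = time.perf_counter() - t0

t0 = time.perf_counter()
d = a @ b                       # ~2 * n^3 FLOPs on the same operands
t_mm = time.perf_counter() - t0

print(f"add:    {n * n / t_add / 1e9:6.1f} GFLOP/s (bandwidth-bound)")
print(f"matmul: {2 * n**3 / t_mm / 1e9:6.1f} GFLOP/s (compute-bound)")
# The matmul's high arithmetic intensity keeps it out of DRAM; the add
# can't escape it, so its FLOP rate is capped by memory bandwidth.
```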

A factor of 10 billion since 2010 😮

A couple of eye-opening slides from @sloeschcke.bsky.social's presentation at today’s @belongielab.org meeting (1/2)

Reposted by Serge Belongie

The #ECCV2026 Malmö 🇸🇪 call for papers is now available. Check it out!

Call for Papers: eccv.ecva.net/Conferences/...

Manohar Paluri (Meta) talks about the SAM family of models for images, video, and audio at Day 2 of Machines Can Think 2026

machinescanthink.ai

Yann’s new startup AMI Labs will be headquartered in Paris, with offices in Montreal, New York, and Singapore

@yann-lecun.bsky.social beams in from a galaxy far, far away to deliver a keynote at Machines Can Think 2026

“Future AI systems will be judged not by what they can already do, but by what new tasks they can learn quickly. This requires an understanding of the world.”

Sergey used the task of tying one’s shoes to showcase the interplay between words and pictures: “Edsger Dijkstra once remarked, ‘A picture may be worth a thousand words, a formula is worth a thousand pictures.’ Ok, Dijkstra, what’s the formula for tying your shoes?”

Machines Can Think 2026 is off to a great start with excellent keynotes by Sergey Tulyakov (@snapinc.bsky.social) and @misovalko.bsky.social (Stealth Startup)

machinescanthink.ai

Reposted by Serge Belongie

ELLIS @ellis.eu · 20d
👋 Meet the ELLIS Board!

This episode features @lawrennd.bsky.social, DeepMind Professor of ML at @cam.ac.uk 🇬🇧.

He shares perspectives on AGI timelines, data vs. algorithms, and why progress in AI should be judged by its impact on people and society.

Watch the video 👉 youtu.be/uxtVA5fMQZQ
Meet the Board: Neil Lawrence (ELLIS Board Member | DeepMind Professor, University of Cambridge)
YouTube video by ELLIS
youtu.be

The attached plot (h/t Nano Banana) illustrates this idea with a handful of examples.

If you're reading this and you've got something cooking, where does it fall on this plot?

(6/6)
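
As a throwaway companion, here's a minimal matplotlib sketch of this kind of TRL-vs-LFG scatter; the example points and placements below are hypothetical, not the ones from the attached plot.

```python
# Hypothetical TRL-vs-LFG scatter (example points are invented).
import matplotlib.pyplot as plt

examples = {
    "curiosity-driven paper": (2, 2),
    "promising prototype": (4, 5),
    "moonshot program": (3, 9),
    "mature product push": (9, 7),
}
fig, ax = plt.subplots()
for name, (trl, lfg) in examples.items():
    ax.scatter(trl, lfg)
    ax.annotate(name, (trl, lfg), xytext=(5, 5), textcoords="offset points")
ax.set_xlim(0.5, 9.5)
ax.set_ylim(0.5, 9.5)
ax.set_xlabel("TRL: technology maturity (1-9)")
ax.set_ylabel("LFG: organizational will (1-9)")
ax.set_title("The TRL vs. LFG Matrix")
plt.show()
```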

The LFG scale also ranges from 1 to 9, and represents organizational will, from low-stakes exploration (LFG 1) to Manhattan Project intensity (LFG 9).

If TRL is the potential energy, then LFG is the kinetic energy.

(5/6)

The TRL (Technology Readiness Level) scale ranges from 1 to 9, and captures a technology's maturity, from basic research (TRL 1) to proven, operational use (TRL 9). It's a useful ladder when talking about innovation, but I propose that we combine it with a new LFG (Let's F@&king Go!) scale.

(4/6)

During the train ride home, I noodled on an idea that you might find useful for mapping innovation and hustle. Those who attended the ELLIS Institute Finland launch event might recall Max Welling and me riffing on this idea during a fireside chat with Peter Sarlin 🚀

(3/6)

I attended the WASP – Wallenberg AI, Autonomous Systems and Software Program Winter Conference in Örebro earlier this week, and came away inspired by the excellent research from the students in their PhD school and the real-world impact of their alumni in industry and startups.

(2/6)

The TRL vs. LFG Matrix: Mapping Tech Maturity & Organizational Will

(1/6)

“AI is everywhere, but it is not everything”

Amy Loutfi kicks off the 2026 WASP – Wallenberg AI, Autonomous Systems and Software Program Winter Conference in Örebro

Reposted by Serge Belongie

FGVC's not dead!

The 13th Workshop on Fine-Grained Visual Categorization has been accepted to CVPR 2026, in Denver, Colorado!

CALL FOR PAPERS: sites.google.com/view/fgvc13/

From Ecology to Medical Imaging, join us as we tackle the long tail and the limits of visual discrimination! #CVPR2026 #AI

Reposted by Serge Belongie

ELLIS @ellis.eu · Jan 7
🎬 EurIPS & ELLIS UnConference aftermovie - Relive the first @euripsconf.bsky.social in Copenhagen, a community-driven, NeurIPS-endorsed milestone for Europe’s AI community.

👉 www.youtube.com/watch?v=8PaC...

📖 More: ellis.eu/news/eurips-...

⏰ Last call to host #EurIPS2026: ellis.eu/news/eurips-...
EurIPS & ELLIS UnConference 2025 - Copenhagen 🇩🇰
YouTube video by ELLIS
www.youtube.com

Reposted by Serge Belongie

We have two open PhD positions at the interface of AI and ecology. Start dates are Sept 2026.

We are looking for candidates with a background in AI/CS, Math, Stats, or Physics who are passionate about solving challenging problems in these domains.

Application deadline is in two weeks.
📢Please share📢 We have an opening for an exciting fully-funded PhD project on computer vision and machine learning applied to biodiversity monitoring with amazing Serge Belongie @belongielab.org and @aicentre.dk. Application deadline coming up on 15 January!
phd.tech.au.dk/for-applican...
Harnessing the power of AI for biodiversity monitoring with camera trap networks - From foundation model to edge processing
phd.tech.au.dk

The multiparty meeting booking problem is at least as hard as autonomous driving, modulo the risk of bodily injury.

4. Who’s sacrificing by making time slots available outside regular work hours/during family time?
5. Did anyone reply to the poll request with a calendly link?

1. Who made & sent the doodle poll?
2. Who didn’t respond to the poll?
3. Did anyone sidestep the poll and reply, “my schedule is complicated; we’ll figure something out”?

Ponder these questions next time you find yourself in a meeting scheduling thread (not counting within-org booking or people with PAs):

My conclusion: calendar booking is 1% about technology and 99% about power.

Modern GenAI has nailed the 1% part, but nothing has changed about the rest of it.

I bookmarked this post on Quora ~8 years ago when a small startup called x.ai launched a calendar booking assistant/bot named Amy Ingram. While I only tried their system for a handful of experiments, the lukewarm review below matched my impressions of its capability.

Reposted by Serge Belongie

What if we could see between the pixels? We’ve been working on a new approach to super-resolve images by combining multiple low-resolution views of the same scene.

Project page: sjyhne.github.io/superf/
Preprint: www.arxiv.org/abs/2512.09115
Demo: huggingface.co/spaces/sjyhn...

Thread [1/n] 👇
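
For readers following along, here's a classical shift-and-add baseline sketch in NumPy that makes the core idea concrete. This is not the authors' method (see their preprint above), and it assumes the sub-pixel shifts between frames are already known; in practice they must be estimated.

```python
# Classical shift-and-add multi-frame super-resolution (baseline sketch only).
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Fuse low-res frames with known sub-pixel shifts onto a high-res grid.

    frames: list of (h, w) arrays; shifts: list of (dy, dx) in LR pixels.
    """
    h, w = frames[0].shape
    hr_sum = np.zeros((h * scale, w * scale))
    hr_cnt = np.zeros_like(hr_sum)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each LR sample to its nearest cell on the finer HR grid.
        ys = np.rint((np.arange(h) + dy) * scale).astype(int)
        xs = np.rint((np.arange(w) + dx) * scale).astype(int)
        yy, xx = np.meshgrid(ys, xs, indexing="ij")
        ok = (yy >= 0) & (yy < h * scale) & (xx >= 0) & (xx < w * scale)
        np.add.at(hr_sum, (yy[ok], xx[ok]), frame[ok])
        np.add.at(hr_cnt, (yy[ok], xx[ok]), 1)
    return hr_sum / np.maximum(hr_cnt, 1)   # average; 0 where no samples fell
```

With scale=2 and four frames shifted by (0, 0), (0, 0.5), (0.5, 0), and (0.5, 0.5), every high-res cell receives exactly one sample; real pipelines add registration and deconvolution on top.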