Christian Wolf
chriswolfvision.bsky.social
Principal Scientist at Naver Labs Europe, Lead of Spatial AI team. AI for Robotics, Computer Vision, Machine Learning. Austrian in France. https://chriswolfvision.github.io/www/
If you are not in the mountains today: weather-wise, the valleys of the Alps are miserable with thick fog. Skiing alone today, family doing other activities. But I am Austrian, I have an obligation to the God of skiing.
December 24, 2025 at 9:28 AM
Reposted by Christian Wolf
👏 F I N A L L Y 👏
Japan is joining Horizon Europe, the EU’s flagship research and innovation programme.

Openness and international cooperation can shape a bright future for science and technology.

Through science, we can build bridges, strengthen competitiveness, and accelerate the green and digital transitions.
December 23, 2025 at 12:57 PM
Reposted by Christian Wolf
The 4th AI for Robotics workshop surfaced converging themes around embodied perception, task learning & evaluation methodologies - emphasising a shift to integrated, context-aware systems. Dive into our key takeaways #AI #Robotics #spatialAI
🥽 ➡️ tinyurl.com/bvxxcn5e
Read about the 5 common themes that emerged from the talks and discussions at the 4th edition of this international workshop.
December 23, 2025 at 1:05 PM
That was a refreshingly multi-faceted view of the culture war.
December 23, 2025 at 10:15 AM
Reposted by Christian Wolf
I doubt that anything resembling genuine AGI is within reach of current AI tools—Terence Tao

mathstodon.xyz/@tao/1157223...
December 22, 2025 at 7:44 AM
Reposted by Christian Wolf
And a real scientist is always working: when skiing, thinking about gradient descent on a frozen landscape. When falling, thinking about negative reward and RL. When going into deep snow for the first time, thinking about an out-of-distribution situation. All life is (machine) learning. No escape!
December 21, 2025 at 11:58 AM
The office view for the next couple of days. I will not be able to follow my Scholar Inbox a lot, but this will be compensated by Raclette and all other forms of heated French cheese.
December 21, 2025 at 10:40 AM
A CVPR paper from 2021. We were doing em dashes heavily before it was cool/uncool 😉

Kervadec et al, How Transferable are Reasoning Patterns in VQA?
arxiv.org/abs/2104.03656
December 19, 2025 at 10:35 AM
It is 2025 and for the moment it is generally frowned upon to be critical of our genius visionaries' priorities and work. But surely there is a level of long-term thinking where ideas should be considered "nuts" instead of "genius-level thinking"?

Where exactly is the threshold?

1/2
I wonder whether I should also start to talk about Dyson spheres and Kardashev II civilizations to build an image as a genius visionary. It seems to work so well.
December 18, 2025 at 10:15 PM
Me: to assign CVPR'26 reviewers, let's not take the top ranks in OpenReview. Let's recommend well-known and highly competent people. They will not be assigned to the paper anyway, since they got so many recommendations.

OpenReview: Hold my beer 🍺

For one paper I have a sum(reviewer-H) = 304
December 18, 2025 at 6:02 AM
Reposted by Christian Wolf
Preprint now on ArXiv 📢
The N-Body Problem: Parallel Execution from Single-Person Egocentric Video
Input: Single-person egocentric video 👤
Output: imagine how these tasks could be performed faster by N > 1 people, correctly, e.g. N=2 👥
📎 arxiv.org/abs/2512.11393
👀 zhifanzhu.github.io/ego-nbody/
1/4
December 15, 2025 at 2:31 PM
I am late to the game, but I finally read the NeurIPS 2025 best paper on gating in LLMs; it is great.

Qiu et al.
Alibaba, U Edinburgh, Stanford, MIT, Tsinghua U
arxiv.org/abs/2505.06708

1/3
December 15, 2025 at 4:42 PM
Reposted by Christian Wolf
Super happy and honored to share that our paper "BSP-OT: Sparse transport plans between discrete measures in log-linear time" won a *Best paper award* at SIGGRAPH Asia 2025!

If you are here, come see my presentation about this work Wednesday afternoon!

Many thanks to the award committee!
December 15, 2025 at 3:31 AM
Reposted by Christian Wolf
“Why AGI Will Not Happen” by Tim Dettmers.

timdettmers.com/2025/12/10/w...

This essay is worth reading. Discusses diminishing returns (and risks) of scaling. The contrast between West and East: “Winner takes all” approach of building the biggest thing vs a long-term focus on practicality.
December 14, 2025 at 3:04 AM
Started on the plane and finished in 24h. That was incredible. Now looking forward to watching the movie.
December 13, 2025 at 11:27 AM
Reposted by Christian Wolf
LLMs didn’t move language modeling research from linguists to AI people, they just moved it from computer scientists who thought language was interesting to computer scientists who thought language was boring
December 12, 2025 at 7:38 PM
Reposted by Christian Wolf
OpenReview opened the door to continuous and major revisions that nobody has time to check properly.
I think that we should go back to short, one-page PDF replies to reviews. It would mean having decisions quicker, so that we actually have time to work on papers before resubmitting them.
December 12, 2025 at 6:55 AM
Ok, I have a strong opinion on this, perhaps you can convince me otherwise: if you need an LLM to understand a paper for reviewing, perhaps you should not be a reviewer.

And if your argument is saved time, then perhaps you planned not to read all the details. Then again, you should not be a reviewer.
December 12, 2025 at 5:36 AM
Reposted by Christian Wolf
The US social media vetting for visas will be devastating for scientific and journalistic conferences, fellowships etc. No global organisation can seriously consider holding an international conference in the US while this policy exists.
December 11, 2025 at 5:35 AM
Reposted by Christian Wolf
Can vision transformers learn without images?🤔👀
Our latest work shows that pretraining ViTs on procedural symbolic data (eg sequences of balanced parentheses) makes subsequent standard training (eg on ImageNet) more data efficient! How is this possible?! ⬇️🧵
December 10, 2025 at 11:06 AM
Reposted by Christian Wolf
I am waiting for world models to become so good that people will finally dump IL and go back to RL, as god intended us to do.
December 10, 2025 at 5:29 AM
I thought I lived close to a metropolitan area (Lyon), but we still had a deer in our garden that frightened our cat and chased it away ...
December 9, 2025 at 2:28 AM