Hokin
@hokin.bsky.social
Philosopher, Scientist, Engineer
https://hokindeng.github.io/
congratulations
November 7, 2025 at 12:22 AM
what type of pen are you using
November 7, 2025 at 12:14 AM
VMEvalKit is 100% open source. We're building this in public with everyone. Plz join us ‼️

👉 Slack: join.slack.com/t/growingail...
👉 Early Results: grow-ai-like-a-child.com/video-reason/
📄 Paper: github.com/hokindeng/VM...
👉 GitHub: github.com/hokindeng/VM...

The age of video reasoning is here 🎬🧠
GitHub - hokindeng/VMEvalKit: This is a framework for evaluating reasoning in foundational Video Models.
November 4, 2025 at 11:39 PM
VMEvalKit is 100% open source. We're building this in public with everyone. Plz join us ‼️

👉 Slack: join.slack.com/t/growingail...
👉 GitHub: github.com/hokindeng/VM...
👉 Early Results: grow-ai-like-a-child.com/video-reason/
📄 Paper: github.com/hokindeng/VM...

The age of video reasoning is here 🎬🧠
GitHub - hokindeng/VMEvalKit: This is a framework for evaluating reasoning in foundational Video Models.
November 4, 2025 at 10:01 PM
While failure cases clearly show idiosyncratic patterns 🧩🤔, we currently lack a principled framework to systematically analyze or interpret them 🔍. We invite everyone to explore these examples 🧪, as they may offer valuable clues for future research directions 💡🧠🚀.
November 4, 2025 at 9:56 PM
Here is a video generated by the video models solving Raven's Matrices. For more, check out grow-ai-like-a-child.com/video-reason/
November 4, 2025 at 9:55 PM
Raven's Matrices are among the standard tasks for testing IQ in humans, requiring subjects to find patterns and regularities. Intriguingly, video models are able to solve them quite well!
November 4, 2025 at 9:53 PM
Here is an example of testing mental rotation in video models. For more, check out grow-ai-like-a-child.com/video-reason/
November 4, 2025 at 9:52 PM
For testing mental rotation, we give them an {n}-voxel structure seen from a tilted camera view (20-40° elevation) and ask them to rotate it horizontally by exactly 180° of azimuth. The hard parts are 1) not deforming the structure and 2) rotating by exactly the right amount. Interestingly, some models are able to do it quite well.
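For intuition, here is a small sketch of the camera geometry being asked for, under a simple spherical parameterization (camera_position is just an illustrative helper, not part of our pipeline):

import math

def camera_position(azimuth_deg, elevation_deg, radius=2.0):
    # Spherical-to-Cartesian; the voxel structure sits at the origin.
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (radius * math.cos(el) * math.cos(az),
            radius * math.cos(el) * math.sin(az),
            radius * math.sin(el))

# Initial tilted view (e.g. 30° elevation) and the target view after a
# horizontal rotation of exactly 180° azimuth at the same elevation.
start = camera_position(azimuth_deg=0, elevation_deg=30)
goal = camera_position(azimuth_deg=180, elevation_deg=30)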
November 4, 2025 at 9:52 PM
Here is a video example. For more, check out grow-ai-like-a-child.com/video-reason/
November 4, 2025 at 9:49 PM
For the Sudoku problems, the video models need to fill the gap with the correct number so that every row and column contains 1, 2, and 3. Surprisingly, this is the easiest task for video models.
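The constraint being tested is tiny; here it is as a sketch (is_solved is a hypothetical helper, not part of VMEvalKit):

def is_solved(grid):
    # Every row and every column of the 3x3 grid must contain exactly {1, 2, 3}.
    target = {1, 2, 3}
    return (all(set(row) == target for row in grid)
            and all(set(col) == target for col in zip(*grid)))

print(is_solved([[1, 2, 3],
                 [2, 3, 1],
                 [3, 1, 2]]))  # True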
November 4, 2025 at 9:49 PM
Here is an example of a video generated by the models solving the maze problem. Check out more at grow-ai-like-a-child.com/video-reason/
November 4, 2025 at 9:48 PM
In the maze problems, video models need to generate videos that navigate the green dot 🟢 to the red flag 🚩. And they are also able to do this quite well ~
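The planning itself is ordinary grid search; here is a minimal sketch of the underlying reachability check, assuming the maze is given as a character grid with '#' walls (an illustrative representation, not VMEvalKit's actual format):

from collections import deque

def reachable(grid, start, flag):
    # BFS from the green dot's cell to the flag's cell; '#' cells are walls.
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([start]), {start}
    while queue:
        r, c = queue.popleft()
        if (r, c) == flag:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

maze = ["S..#",
        ".#.#",
        "...F"]
print(reachable(maze, start=(0, 0), flag=(2, 3)))  # True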
November 4, 2025 at 9:48 PM
Here is a generated video solving the Chess problem. For more examples, check out: grow-ai-like-a-child.com/video-reason/
November 4, 2025 at 9:45 PM
Let's see some examples. Video models are able to figure out the checkmate moves in the following problems.
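As an aside, verifying that a proposed move really is checkmate is easy with the python-chess library; a small sketch using the classic Scholar's Mate line (not one of our actual test positions):

import chess

board = chess.Board()
for san in ["e4", "e5", "Bc4", "Nc6", "Qh5", "Nf6", "Qxf7"]:
    board.push_san(san)       # play out the Scholar's Mate line
print(board.is_checkmate())   # True: Qxf7 delivers checkmate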
November 4, 2025 at 9:45 PM
Idiosyncratic behavioral patterns exist.

For example, Sora-2 somehow figures out how to solve Chess problems, but none of the other models show this ability.

Veo 3 and 3.1 are actually able to do mental rotation quite well, but really struggle with the maze problems.
November 4, 2025 at 9:44 PM
Tasks also exhibit a clear difficulty hierarchy across all models, with Sudoku being the easiest and mental rotation the hardest.
November 4, 2025 at 9:38 PM
Models exhibit a clear performance hierarchy, with Sora-2 currently being the best model.
November 4, 2025 at 9:37 PM
The basic unit of VMEvalKit is a Task Pair:

1️⃣ Initial image: unsolved puzzle
2️⃣ Text instruction: “Solve this ...”
3️⃣ Final image: correct solution (hidden during generation)

Models see (1)+(2); we compare their output to (3). Simple and straightforward ✅
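Roughly, in Python terms (an illustrative sketch only; TaskPair, generate_video, and score_final_frame are placeholder names, not VMEvalKit's actual API):

from dataclasses import dataclass
from typing import Callable

@dataclass
class TaskPair:
    initial_image: str   # (1) unsolved puzzle, shown to the model
    instruction: str     # (2) text prompt, e.g. "Solve this ..."
    solution_image: str  # (3) correct solution, hidden during generation

def evaluate(pair: TaskPair,
             generate_video: Callable[[str, str], str],
             score_final_frame: Callable[[str, str], bool]) -> bool:
    # The model only ever sees (1) + (2); its generated video is then
    # compared against the hidden solution (3).
    generated = generate_video(pair.initial_image, pair.instruction)
    return score_final_frame(generated, pair.solution_image)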
November 4, 2025 at 9:36 PM
Our paper is now available at arxiv.org/abs/2510.20835. For anyone interested, we’d love to hang out and chat 💬🧃

#EmbodiedAI #SpatialReasoning #NeuroAI #CognitiveScience
Rethinking the Simulation vs. Rendering Dichotomy: No Free Lunch in Spatial World Modelling
Spatial world models, representations that support flexible reasoning about spatial relations, are central to developing computational models that could operate in the physical world, but their precise mechanistic underpinnings are nuanced by the borrowing of underspecified or misguided accounts of human cognition. This paper revisits the simulation versus rendering dichotomy and draws on evidence from aphantasia to argue that fine-grained perceptual content is critical for model-based spatial reasoning. Drawing on recent research into the neural basis of visual awareness, we propose that spatial simulation and perceptual experience depend on shared representational geometries captured by higher-order indices of perceptual relations. We argue that recent developments in embodied AI support this claim, where rich perceptual details improve performance on physics-based world engagements. To this end, we call for the development of architectures capable of maintaining structured perceptual representations as a step toward spatial world modelling in AI.
November 3, 2025 at 12:16 AM
Third, in embodied AI, explicit simulators (MuJoCo/Isaac/Genesis) are vital but brittle on their own. Implicit world models (VIP, R3M, visual pretraining) supply perceptual structure that boosts generalization, long-horizon planning, and sim-to-real transfer.
November 3, 2025 at 12:16 AM
However, visual and spatial mental content must co-construct conscious experience rather than run on isolated tracks.
November 3, 2025 at 12:16 AM
Second, it makes it sound as if the dorsal stream, where the "MuJoCo" software of our brain lies, almost becomes a "zombie" stream, i.e. one with no participation of our conscious experience.
November 3, 2025 at 12:15 AM