Jim RB
@jbohnslav.bsky.social
computer vision + machine learning. Perception at Zoox. Prev: Cobot, PhD. Arxiv every day.
Pinned
Hot take: We should stop calling image+text->text models 'MLLMs' because true multimodal models like Gemini Flash 2.0 are a different beast entirely.

Here's my deep dive into the naming debate with citations from 15+ recent papers (link in thread)
ZEBRA-CoT

Dataset for vision-language reasoning where the model *generates images during the CoT*. Example: for geometry problems, it's helpful to draw lines in image space.

182K CoT labels: math, visual search, robot planning, and more.

Only downside: CC BY-NC license :(
July 24, 2025 at 1:01 PM
Franca: Nested Matryoshka Clustering for Scalable Visual Representation Learning
arxiv: arxiv.org/abs/2507.14137
code: github.com/valeoai/Franca
Franca: Nested Matryoshka Clustering for Scalable Visual Representation Learning
We present Franca (pronounced Fran-ka): free one; the first fully open-source (data, code, weights) vision foundation model that matches and in many cases surpasses the performance of state-of-the-art...
arxiv.org
July 23, 2025 at 12:17 PM
Cool technique: RASA, Removal of Absolute Spatial Attributes. They decode grid coordinates from the features to find the plane in feature space that encodes absolute position, then subtract it off, folding the correction into the last linear layer so the forward pass is unchanged.
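Rough sketch of the idea as I read it (the linear-probe fit, shapes, and names are mine, not the paper's code):

```python
import torch

# Toy RASA: find the position subspace, project it out, fold into the last layer.
N, D = 4096, 768               # patch tokens, feature dim (illustrative)
feats = torch.randn(N, D)      # patch features from the encoder
coords = torch.rand(N, 2)      # normalized (row, col) grid coordinates per patch

# 1) Fit the directions that linearly encode absolute position: coords ≈ feats @ W.
W = torch.linalg.lstsq(feats, coords).solution          # (D, 2)

# 2) Build a projector that removes the span of those directions.
Q, _ = torch.linalg.qr(W)                                # orthonormal basis of the position plane
P = torch.eye(D) - Q @ Q.T                               # projector onto its orthogonal complement

# 3) Fold the projection into the last linear layer so inference cost is unchanged:
#    (x @ P) @ W_out.T == x @ (W_out @ P).T because P is symmetric.
last_linear = torch.nn.Linear(D, D, bias=True)
with torch.no_grad():
    last_linear.weight.copy_(last_linear.weight @ P)
```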
July 23, 2025 at 12:17 PM
Beats or is competitive with SigLIP/SigLIP 2 and DINOv2 on linear eval, OOD detection, and linear segmentation.
July 23, 2025 at 12:17 PM
Franca

Fully open vision encoder. Masks image, encodes patches, then trains student to match teacher's clusters. Key advance: Matryoshka clustering. Each slice of the embedding gets its own projection head and clustering objective. Fewer features == fewer clusters to match.
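Rough sketch of the Matryoshka-clustering objective (slice sizes, cluster counts, and the DINO-style loss are my guesses, not the paper's exact recipe):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 768
slices = [96, 192, 384, 768]           # nested prefixes of the embedding (illustrative sizes)
n_clusters = [1024, 2048, 4096, 8192]  # fewer features -> fewer clusters to match

class MatryoshkaClusterHeads(nn.Module):
    """One projection head (prototype layer) per embedding slice."""
    def __init__(self):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(d, k, bias=False) for d, k in zip(slices, n_clusters)
        )

    def forward(self, z):                       # z: (batch, D)
        return [head(z[:, :d]) for head, d in zip(self.heads, slices)]

def matryoshka_loss(student_z, teacher_z, heads, temp_s=0.1, temp_t=0.04):
    """Student matches the sharpened, detached teacher cluster assignment at every scale.
    In practice the teacher would be an EMA copy with its own heads; one set is reused here for brevity."""
    loss = 0.0
    for s_logits, t_logits in zip(heads(student_z), heads(teacher_z)):
        t_assign = F.softmax(t_logits.detach() / temp_t, dim=-1)
        loss = loss + (-(t_assign * F.log_softmax(s_logits / temp_s, dim=-1)).sum(-1)).mean()
    return loss / len(slices)
```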
July 23, 2025 at 12:17 PM
VRU-Accident: A Vision-Language Benchmark for Video Question Answering and Dense Captioning for Accident Scene Understanding
arxiv: arxiv.org/abs/2507.098...
project: vru-accident.github.io
VRU-Accident: A Vision-Language Benchmark for Video Question Answering and Dense Captioning for Accident Scene Understanding
Ensuring the safety of vulnerable road users (VRUs), such as pedestrians and cyclists, is a critical challenge for autonomous driving systems, as crashes involving VRUs often result in severe or fatal...
arxiv.org
July 15, 2025 at 3:13 PM
VRU-Accident

New benchmark of 1K videos, 1K captions, and 6K MCQs from accidents involving VRUs. Example: "why did the accident happen?" "(B): pedestrian moves or stays on the road."

Current VLMs get ~50-65% accuracy, much worse than humans (95%).
July 15, 2025 at 3:13 PM
Side note: I've always liked PaliGemma's prefix-LM masking. Why have causal attention over image tokens?
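For reference, a prefix-LM mask just makes the image+prompt block bidirectional while the generated text stays causal. Toy sketch:

```python
import torch

def prefix_lm_mask(n_prefix: int, n_total: int) -> torch.Tensor:
    """Boolean attention mask (True = may attend).

    Tokens 0..n_prefix-1 (image + prompt) attend bidirectionally to each other;
    the remaining (generated text) tokens stay causal but see the whole prefix.
    """
    mask = torch.tril(torch.ones(n_total, n_total, dtype=torch.bool))  # causal base
    mask[:n_prefix, :n_prefix] = True                                  # bidirectional prefix block
    return mask

# e.g. 256 image+prompt tokens as the prefix, 32 generated text tokens
print(prefix_lm_mask(256, 288).shape)  # torch.Size([288, 288])
```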
July 15, 2025 at 1:56 PM
BlindSight

AMD paper: they find attention heads often have stereotyped sparsity patterns (e.g. only attending within an image, not across). They generate sparse attention variants for each prompt. Theoretically saves ~35% FLOPs for 1-2% worse on benches.
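The within-image pattern is basically a block-diagonal mask over the image spans. Toy illustration (not their template detection or kernel):

```python
import torch

def within_image_mask(segment_ids: torch.Tensor) -> torch.Tensor:
    """True = may attend. segment_ids: (seq,) with 0 for text and 1, 2, ... per image.

    Image tokens only attend within their own image; text tokens attend everywhere.
    In practice you'd AND this with the causal mask.
    """
    same_segment = segment_ids[:, None] == segment_ids[None, :]
    text_query = (segment_ids == 0)[:, None]
    return same_segment | text_query

ids = torch.tensor([0, 0, 1, 1, 1, 2, 2, 0])   # text, image 1, image 2, more text
print(within_image_mask(ids).int())
```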
July 15, 2025 at 1:56 PM
Long-RL

Nvidia paper scaling RL to long videos. First trains with SFT on a synthetic long CoT dataset, then does GRPO with up to 512 video frames. Uses cached image embeddings + sequence parallelism, speeding up rollouts >2X.

Bonus: code is already up!
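The caching win comes from GRPO sampling many rollouts per prompt, so the expensive vision pass can be shared. Rough sketch of the idea (placeholder functions, not their implementation):

```python
import torch

@torch.no_grad()
def grpo_rollouts_with_cache(vision_encoder, generate, frames, prompt, group_size=8):
    """Encode up to 512 frames once per prompt, then reuse the embeddings for every
    rollout in the GRPO group. `vision_encoder` and `generate` are placeholders."""
    frame_embeds = vision_encoder(frames)        # expensive vision pass: done once
    return [generate(frame_embeds, prompt, do_sample=True) for _ in range(group_size)]
```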
July 11, 2025 at 1:18 PM
They find the entropy of tokens like "wait" or "alternatively" to be strongly correlated with MMMU. Neat!
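One way to measure that (my formulation, not necessarily theirs): average the predictive entropy at the positions where those tokens were sampled.

```python
import torch
import torch.nn.functional as F

def entropy_at_tokens(logits, token_ids, target_ids):
    """Mean predictive entropy at positions whose sampled token is in `target_ids`
    (e.g. the ids for "wait" / "alternatively"). Assumes logits[i] is the
    distribution token_ids[i] was sampled from. logits: (seq, vocab); token_ids: (seq,)."""
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(-1)                  # (seq,)
    mask = torch.isin(token_ids, torch.as_tensor(list(target_ids)))
    return entropy[mask].mean()
```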
July 9, 2025 at 3:41 PM
Fine-tuning the connector at the end gives a point or two on MMMU. I wonder how much of this is benchmaxxing; I haven't seen an additional SFT stage after RL before.
July 9, 2025 at 3:41 PM
They construct their warm-start SFT data with synthetic traces from Skywork-R1V2.

GRPO is pretty standard; interesting that they only did math rather than math, grounding, and other possible RLVR tasks. Qwen2.5-Instruct-32B judges the accuracy of the answer in addition to rule-based verification.
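Sketch of what a combined rule-based + judge reward could look like (the fallback structure and prompt are my assumptions, not the paper's):

```python
import re

def math_reward(response: str, gold: str, judge) -> float:
    """Rule-based check first; fall back to an LLM judge (e.g. Qwen2.5-Instruct-32B)
    when the answer can't be parsed. `judge` is a placeholder callable returning text."""
    m = re.search(r"\\boxed\{([^}]*)\}", response)   # assumes answers come in \boxed{...}
    if m is not None:
        return 1.0 if m.group(1).strip() == gold.strip() else 0.0
    verdict = judge(
        f"Gold answer: {gold}\nModel answer: {response}\nIs the model answer correct? Reply yes or no."
    )
    return 1.0 if verdict.strip().lower().startswith("yes") else 0.0
```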
July 9, 2025 at 3:41 PM
Skywork-R1V3: new reasoning VLM with 76% MMMU.

InternViT-6B stitched with QwQ-32B. SFT warmup, GRPO on math, then a small SFT fine-tune at the end.

Good benches, actual ablations, and interesting discussion.

Details: 🧵
July 9, 2025 at 3:41 PM
High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning
arxiv: arxiv.org/abs/2507.05920
code: github.com/EvolvingLMMs...
High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning
State-of-the-art large multi-modal models (LMMs) face challenges when processing high-resolution images, as these inputs are converted into enormous visual tokens, many of which are irrelevant to the ...
arxiv.org
July 9, 2025 at 3:24 PM
Training: use verl with vLLM for rollouts. Limit image resolution to 1280 visual tokens. Train on 32 H100s.

Results: +18 points better on V* compared to Qwen2.5-VL, and +5 points better than GRPO alone.
July 9, 2025 at 3:24 PM
RL: GRPO. Reward: only the correct answer, not valid grounding coordinates. Seems weird not to add that, though.

Data: training subset of MME-RealWorld. Evaluate on V*.
July 9, 2025 at 3:24 PM
Uses Qwen2.5-VL as the base model. The NaViT encoder makes it easy to handle many images of different shapes.

They use an SFT warm-start, as the VLMs struggled to output good grounding coordinates. They constructed two-turn samples for this.
July 9, 2025 at 3:24 PM
MGPO: multi-turn grounding-based policy optimization.

I've been waiting for a paper like this! Trains the LLM to iteratively crop regions of interest to answer a question, and the only reward is the final answer.

Details in thread 👇
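Sketch of the multi-turn loop as I read it (the crop policy and helper names are illustrative placeholders):

```python
def mgpo_rollout(vlm, image, question, max_turns=2):
    """Multi-turn grounding rollout: the model predicts a box over the full image,
    gets the crop back as a new image, then answers. Only the final answer is rewarded.
    `vlm.generate`, `parse_box`, and `crop` are placeholders."""
    history = [("image", image), ("user", question)]
    for _ in range(max_turns - 1):
        reply = vlm.generate(history)             # e.g. "<ground>[x1, y1, x2, y2]</ground>"
        box = parse_box(reply)
        if box is None:
            break
        history += [("assistant", reply), ("image", crop(image, box))]
    answer = vlm.generate(history)
    return answer                                  # reward: 1 if it matches the ground truth, else 0
```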
July 9, 2025 at 3:24 PM
DriveMRP: Enhancing Vision-Language Models with Synthetic Motion Data for Motion Risk Prediction
arxiv: arxiv.org/abs/2507.02948
code: github.com/hzy138/Drive...
DriveMRP: Enhancing Vision-Language Models with Synthetic Motion Data for Motion Risk Prediction
Autonomous driving has seen significant progress, driven by extensive real-world data. However, in long-tail scenarios, accurately predicting the safety of the ego vehicle's future motion remains a ma...
arxiv.org
July 8, 2025 at 2:03 PM
Using automatically generated risk-category labels and the front-facing view, they have GPT-4o caption the scenarios. The metrics are based on caption similarity plus classification metrics on risk type.
July 8, 2025 at 2:03 PM