Neha Balamurugan
@nbalamur.bsky.social
I led this work with Sarah Wu, Adam Chun, Gabe Gaw, Cristóbal Eyzaguirre, and Professor Tobias Gerstenberg.

🧩 Website : nehabalamurugan.com/spot-the-bal...
📊 Dataset: huggingface.co/datasets/neh...
📄 Preprint: arxiv.org/abs/2511.00261
Spot The Ball: A Benchmark for Visual Social Inference
A new benchmark for evaluating visual social reasoning in VLMs using sports scenes.
nehabalamurugan.com
November 13, 2025 at 3:10 AM
Our goal is to probe whether models possess the social understanding to infer hidden states from body orientation, gaze, and the contextual cues that humans naturally exploit, and to motivate innovations toward this capacity.
November 13, 2025 at 3:10 AM
We examined the reasoning text produced by humans and models. Models refer to pose far more often than gaze, except under chain-of-thought prompting, which pushes them toward more balanced, human-like reasoning patterns.
November 13, 2025 at 3:09 AM
We found that models often rely on simple cues such as guessing near a player or near the image center to solve the task.
November 13, 2025 at 3:09 AM
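To make those heuristic baselines concrete, here is a minimal Python sketch (not the paper's implementation) of a center-cell baseline and a nearest-player baseline on a 6×10 grid; the player_boxes detections it takes as input come from a hypothetical preprocessing step.

```python
GRID_ROWS, GRID_COLS = 6, 10  # the benchmark's grid

def cell_of(x, y, width, height):
    """Map a pixel coordinate to a (row, col) cell of the grid."""
    col = min(int(x / width * GRID_COLS), GRID_COLS - 1)
    row = min(int(y / height * GRID_ROWS), GRID_ROWS - 1)
    return row, col

def center_baseline(width, height):
    """Always guess the cell that contains the image center."""
    return cell_of(width / 2, height / 2, width, height)

def near_player_baseline(player_boxes, width, height):
    """Guess the cell at the centroid of the player nearest the image center.
    player_boxes: list of (x1, y1, x2, y2) boxes from a hypothetical detector."""
    cx, cy = width / 2, height / 2
    centroids = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in player_boxes]
    px, py = min(centroids, key=lambda c: (c[0] - cx) ** 2 + (c[1] - cy) ** 2)
    return cell_of(px, py, width, height)

print(center_baseline(1920, 1080))                       # (3, 5)
print(near_player_baseline([(100, 200, 220, 480)], 1920, 1080))
```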
We find that humans outperform all models (Gemini, GPT, LLaMA, Qwen) across all prompting strategies. Accuracy is 2–3× higher for humans, and the Wasserstein distances show that models’ guess distributions differ substantially from human guess distributions.
November 13, 2025 at 3:08 AM
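For readers curious how such a distribution comparison could be computed, here is a minimal sketch using SciPy's 1D wasserstein_distance along each grid axis separately; the paper's exact (likely 2D) formulation may differ, and the guesses below are made up.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def guess_distribution_gap(human_cells, model_cells):
    """1D Wasserstein distance between guess distributions, per grid axis.
    human_cells / model_cells: arrays of (row, col) guesses, one per trial."""
    human = np.asarray(human_cells, dtype=float)
    model = np.asarray(model_cells, dtype=float)
    row_gap = wasserstein_distance(human[:, 0], model[:, 0])
    col_gap = wasserstein_distance(human[:, 1], model[:, 1])
    return row_gap, col_gap

# Toy example: human guesses spread across the frame, model guesses cluster
# near the center, so both distances come out clearly nonzero.
humans = [(2, 4), (1, 7), (4, 2), (3, 8)]
model = [(3, 5), (3, 5), (2, 5), (3, 4)]
print(guess_distribution_gap(humans, model))
```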
Contributions of the work:
1️⃣ Spot The Ball task with human baselines
2️⃣ Large dataset including soccer, volleyball, and basketball images
3️⃣ Scalable image-generation pipeline for any sport with a ball
November 13, 2025 at 3:08 AM
In Spot the Ball, the goal is to infer the location of a removed ball from a sports frame. This task evaluates a model’s ability to localize a hidden object through reasoning over social and physical contextual cues such as players’ gaze, body orientation, and position.
November 13, 2025 at 3:08 AM
I led this work with the support of Sarah Wu, Adam Chun, Gabe Gaw, Cristóbal Eyzaguirre, and Professor Tobias Gerstenberg.
November 6, 2025 at 7:51 PM
Our goal with this work is to motivate progress in social inference for AI. We hope this benchmark motivates architectural innovations that help models understand social information as robustly as, if not better than, humans do, enabling safe deployment in human-AI contexts.
November 6, 2025 at 7:47 PM
We then examined the reasoning text produced by humans and models and found that models reference pose far more often than gaze, except under chain-of-thought prompting, which pushes them toward more balanced, human-like reasoning patterns.
November 6, 2025 at 7:46 PM
We found that models often rely on simple cues such as guessing near a player or near the image center to solve the task.
November 6, 2025 at 7:45 PM
Humans outperform all models (Gemini, GPT, LLaMA, Qwen) across all prompting strategies. Accuracy is 2–3× higher for humans, and the Wasserstein distances show that models’ guess distributions differ substantially from human guess distributions.
November 6, 2025 at 7:45 PM
Contributions of the work:
1️⃣ Spot The Ball task with human baselines
2️⃣ Large dataset including soccer, basketball, and volleyball images
3️⃣ Scalable image-generation pipeline for any sport with a ball
November 6, 2025 at 7:44 PM
This task evaluates a model’s ability to localize a hidden object by reasoning over social and physical contextual cues such as players’ gaze, body orientation, and spatial positioning, together with sport-specific knowledge, rather than relying on direct visual evidence of the object itself.
November 6, 2025 at 7:44 PM
In Spot the Ball, the task is to infer the location of a removed ball from a sports frame. Models and humans select the grid cell they believe contains the ball and give a reason for their selection.
November 6, 2025 at 7:43 PM
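As an illustration of what that output could look like, here is a small sketch with a made-up response format and a parser that recovers the chosen cell; the benchmark's actual answer schema may be different.

```python
import re

# Hypothetical model output, for illustration only.
EXAMPLE_RESPONSE = """
Cell: B7
Reason: The goalkeeper's gaze and the striker's body orientation both point
toward the upper-right corner of the penalty area.
"""

def parse_response(text, rows="ABCDEF"):
    """Extract a (row, col) cell like 'B7' from free-form model output."""
    match = re.search(rf"\b([{rows}])\s*(10|[1-9])\b", text)
    if match is None:
        return None
    return rows.index(match.group(1)), int(match.group(2)) - 1

print(parse_response(EXAMPLE_RESPONSE))  # (1, 6)
```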
Come chat! 🎤
I'll be presenting this work at #CogSci2025:
📍 Poster Number P1-B-8
🗓️ Poster Session: Poster Session 1
🧠 Poster title: “Spot the Ball: Evaluating Visual Causal Inference in VLMs under Occlusion”
July 28, 2025 at 9:48 PM
We also built:
✅ An inpainting-based image generation pipeline
✅ A public demo where you can test your visual inference skills
✅ A dataset of 3000+ labeled soccer images for future work
July 28, 2025 at 9:46 PM
Results:
Humans outperform all models—even with chain-of-thought scaffolding.
GPT-4o gets closer with explicit pose/gaze cues, but still falls short in many cases.
July 28, 2025 at 9:46 PM
Three prompt types, increasing in reasoning complexity (rough templates sketched after this post):
🔹 Basic: “Which grid cell contains the ball?”
🔹 Implicit: Encourages attention to pose/gaze
🔹 Chain-of-thought: Step-by-step inference
July 28, 2025 at 9:45 PM
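Rough templates along these lines (my paraphrase, not the paper's exact prompt wording) might look like:

```python
# Illustrative prompt templates only; the benchmark's exact wording differs.
PROMPTS = {
    "basic": (
        "The ball has been removed from this image. The image is divided into "
        "a 6x10 grid of cells. Which grid cell contains the ball?"
    ),
    "implicit": (
        "The ball has been removed from this image. Consider where the players "
        "are looking and how their bodies are oriented. Which of the 60 grid "
        "cells (6 rows x 10 columns) most likely contains the ball?"
    ),
    "chain_of_thought": (
        "The ball has been removed from this image. First describe each "
        "player's gaze direction and body orientation, then reason step by "
        "step about where the ball must be, and finally name the grid cell "
        "(6 rows x 10 columns) that contains it."
    ),
}
```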
The task is mapped to a 6×10 grid → a 60-class classification problem.
We benchmark humans and models (GPT-4o, Gemini, LLaMA, Qwen) on soccer, basketball, and volleyball.
July 28, 2025 at 9:45 PM
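One plausible way to turn a ground-truth ball coordinate into one of the 60 classes, and to measure how far a guess lands from it in grid units, is sketched below; the scoring details here are my assumption, not necessarily the paper's.

```python
GRID_ROWS, GRID_COLS = 6, 10  # 60 cells total

def pixel_to_class(x, y, width, height):
    """Map a ball coordinate in pixels to one of the 60 cell classes."""
    col = min(int(x / width * GRID_COLS), GRID_COLS - 1)
    row = min(int(y / height * GRID_ROWS), GRID_ROWS - 1)
    return row * GRID_COLS + col  # class index in [0, 59]

def cell_distance(pred_class, true_class):
    """Euclidean distance between two cells in grid units (one possible
    leniency metric; the paper's scoring may differ)."""
    pr, pc = divmod(pred_class, GRID_COLS)
    tr, tc = divmod(true_class, GRID_COLS)
    return ((pr - tr) ** 2 + (pc - tc) ** 2) ** 0.5

# Example: ball removed at pixel (1530, 410) in a 1920x1080 frame.
true_cls = pixel_to_class(1530, 410, 1920, 1080)   # -> class 27
print(true_cls, cell_distance(pred_class=34, true_class=true_cls))
```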
In high-stakes, real-world scenes, humans infer what's missing, a crucial skill in driving, robotics, and sports.
We isolate this in a simple but rich task: spot the masked ball from a single frame.
July 28, 2025 at 9:43 PM
The Spot the Ball game has been around for decades.
🗓️ It began in the UK in the 1970s as a popular newspaper contest
👥 At its peak, over 3 million people played weekly
Players had to guess where the ball had been removed from a photo—just like our benchmark does today.
July 28, 2025 at 9:42 PM