Neha Balamurugan
nbalamur.bsky.social
We examined the reasoning text produced by humans and models. Models refer to pose far more often than gaze, except under chain-of-thought prompting, which pushes them toward more balanced, human-like reasoning patterns.
November 13, 2025 at 3:09 AM
We found that models often rely on shallow heuristics, such as guessing near a player or near the image center, to solve the task.
November 13, 2025 at 3:09 AM
We find that humans outperform all models (Gemini, GPT, LLaMA, Qwen) across all prompting strategies. Human accuracy is 2–3× higher, and Wasserstein distances show that models’ guess distributions differ markedly from human distributions.
November 13, 2025 at 3:08 AM
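For readers curious how such a distribution comparison works: below is a minimal sketch of the 1-D earth mover's (Wasserstein-1) distance between two equal-size samples of guess coordinates. The function names and the per-axis setup are illustrative assumptions, not the paper's exact metric.

```python
# Minimal sketch: Wasserstein-1 distance between two equal-size 1-D samples.
# For equal-size empirical distributions it reduces to the mean absolute
# difference of the sorted values. (General-purpose code would use
# scipy.stats.wasserstein_distance instead; this is for illustration only.)

def wasserstein_1d(u, v):
    """Earth mover's distance between two equal-size 1-D samples."""
    if len(u) != len(v):
        raise ValueError("this sketch assumes equal sample sizes")
    return sum(abs(a - b) for a, b in zip(sorted(u), sorted(v))) / len(u)

# Hypothetical example: x-coordinates of human vs. model guesses.
human_x = [0.2, 0.4, 0.5, 0.7]
model_x = [0.4, 0.5, 0.5, 0.6]  # clustered near the center
dist = wasserstein_1d(human_x, model_x)
```

A larger distance means the model's guesses are laid out differently across the image than the humans' guesses, even if some individual guesses coincide.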
Contributions of the work:
1️⃣ Spot The Ball task with human baselines
2️⃣ Large dataset including soccer, volleyball, and basketball images
3️⃣ Scalable image-generation pipeline for any sport with a ball
November 13, 2025 at 3:08 AM
In Spot the Ball, the goal is to infer the location of a removed ball from a sports frame. The task evaluates a model’s ability to localize a hidden object by reasoning over social and physical contextual cues such as players’ gaze, body orientation, and position.
November 13, 2025 at 3:08 AM
🧠⚽️🏀🏐 Preprint Alert!!
We built the Spot The Ball benchmark to test visual social inference – the ability to infer missing information from others’ behavior – in vision-language models (VLMs).

Try the task yourself here: nehabalamurugan.com/spot-the-bal...
November 13, 2025 at 3:06 AM
In Spot the Ball, the task is to infer the location of a removed ball from a sports frame. Models and humans select the grid cell they believe contains the ball and give a reason for their selection.
November 6, 2025 at 7:43 PM
Come chat! 🎤
I'll be presenting this work at #CogSci2025:
📍 Poster P1-B-8
🗓️ Poster Session 1
🧠 Poster title: “Spot the Ball: Evaluating Visual Causal Inference in VLMs under Occlusion”
July 28, 2025 at 9:48 PM
We also built:
✅ An inpainting-based image generation pipeline
✅ A public demo where you can test your visual inference skills
✅ A dataset of 3000+ labeled soccer images for future work
July 28, 2025 at 9:46 PM
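As a rough illustration of the ball-removal step in such a pipeline: the sketch below builds a circular mask around an annotated ball and blanks those pixels before handing off to an inpainter. All names and the padding choice are invented for illustration; the pipeline's actual inpainting model is not shown.

```python
import numpy as np

# Hypothetical sketch of the ball-removal step: given a frame and the ball's
# annotated center/radius, build a circular mask marking the pixels to erase.
# The real pipeline then fills the masked region with an inpainting model;
# here we only show the masking.

def ball_mask(height, width, cx, cy, radius, pad=4):
    """Boolean mask that is True inside a padded circle around the ball."""
    ys, xs = np.ogrid[:height, :width]
    return (xs - cx) ** 2 + (ys - cy) ** 2 <= (radius + pad) ** 2

frame = np.random.randint(0, 255, size=(60, 100, 3), dtype=np.uint8)
mask = ball_mask(60, 100, cx=50, cy=30, radius=6)
masked = frame.copy()
masked[mask] = 0  # blank the ball region before inpainting
```

Padding the mask slightly beyond the ball's radius helps hide shadows and motion blur that would otherwise give the location away.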
Results:
Humans outperform all models—even with chain-of-thought scaffolding.
GPT-4o gets closer with explicit pose/gaze cues, but still falls short in many cases.
July 28, 2025 at 9:46 PM
The task is mapped to a 6×10 grid → a 60-class classification problem.
We benchmark humans and models (GPT-4o, Gemini, LLaMA, Qwen) on soccer, basketball, and volleyball.
July 28, 2025 at 9:45 PM
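The 6×10-grid framing can be sketched as a simple coordinate-to-class mapping. The helper names and the row-major, 0-based indexing below are assumptions for illustration, not the benchmark's published scheme.

```python
# Sketch: map a normalized guess (x, y) in [0, 1] to one of 60 cells on a
# 6-row by 10-column grid, turning localization into 60-way classification.

def guess_to_cell(x: float, y: float, rows: int = 6, cols: int = 10) -> int:
    """Return the 0-based, row-major cell index for a normalized guess."""
    col = min(int(x * cols), cols - 1)  # clamp so x == 1.0 stays in-grid
    row = min(int(y * rows), rows - 1)
    return row * cols + col

def cell_to_bounds(cell: int, rows: int = 6, cols: int = 10):
    """Inverse: normalized (x0, y0, x1, y1) bounds of a cell."""
    row, col = divmod(cell, cols)
    return (col / cols, row / rows, (col + 1) / cols, (row + 1) / rows)
```

Discretizing to cells makes human and model answers directly comparable: both reduce to a single class label, and accuracy or distance metrics can be computed over the same 60 classes.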
In high-stakes, real-world scenes, humans infer what's missing, a crucial skill in driving, robotics, and sports.
We isolate this in a simple but rich task: spot the masked ball from a single frame.
July 28, 2025 at 9:43 PM
The Spot the Ball game has been around for decades.
🗓️ It began in the UK in the 1970s as a popular newspaper contest
👥 At its peak, over 3 million people played weekly
Players had to guess where the ball had been removed from a photo—just like our benchmark does today.
July 28, 2025 at 9:42 PM
🧠⚽ Spot the ball! New benchmark for visual scene understanding!
We ask: Can people and models locate a hidden ball in sports images using only visual context and reasoning?
🕹️ Try the task: v0-new-project-9b5vt6k9ugb.vercel.app
#CogSci2025
July 28, 2025 at 9:41 PM