SP Arun
@sparuniisc.bsky.social
Neuroscientist at IISc Bangalore studying visual perception using 🐒🚶🏽♀️💻
https://sites.google.com/site/visionlabiisc/
Come join us for this exciting event next week. You can't possibly have anything better to do next Saturday 🙃
June 14, 2025 at 12:49 PM
If visual homogeneity is truly a universal computation, then it should also work for a symmetry task. Here too, we obtained the same pattern of results - the visual homogeneity computation predicted symmetry responses, and the same region showed proportional activations!! 25/n
April 18, 2025 at 7:30 PM
This region is just anterior to the lateral occipital complex, where neural dissimilarity between images matched perceived dissimilarity 24/n
April 18, 2025 at 7:30 PM
In the brain, we found a localized region whose activity is directly proportional to visual homogeneity. 23/n
April 18, 2025 at 7:30 PM
So armed with these predictions, we collected and analyzed our data... and lo and behold! We found that, exactly as we predicted, we can find a center in perceptual space relative to which distance computations do predict oddball present/absent search 22/n
April 18, 2025 at 7:30 PM
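A minimal sketch of the distance-to-center idea (toy data and names; how displays are embedded in perceptual space and how the center is found here are my assumptions, not the exact procedure from the paper):

```python
import numpy as np

# Toy setup: each search display is a point in a low-dimensional perceptual space
# (in practice such coordinates would come from measured dissimilarities).
rng = np.random.default_rng(0)
absent_displays = rng.normal(loc=0.0, scale=0.3, size=(50, 2))   # hypothetical target-absent displays
present_displays = rng.normal(loc=1.0, scale=0.3, size=(50, 2))  # hypothetical target-present displays

def visual_homogeneity(displays, center):
    """One number per display: its distance from a candidate center."""
    return np.linalg.norm(displays - center, axis=1)

# Crude stand-in for finding the center: in this toy data the target-absent displays
# cluster, so their mean works; in the study the center would be fit to behaviour.
center = absent_displays.mean(axis=0)

vh_absent = visual_homogeneity(absent_displays, center)
vh_present = visual_homogeneity(present_displays, center)

# A single threshold on this one number then acts as the present/absent decision.
threshold = (vh_absent.mean() + vh_present.mean()) / 2
accuracy = np.mean(np.concatenate([vh_absent < threshold, vh_present > threshold]))
print(f"toy present/absent accuracy from distance-to-center alone: {accuracy:.2f}")
```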
In the brain, if there's a single region (region VH) that encodes this quantity, its response should be directly proportional to visual homogeneity. 20/n
April 18, 2025 at 7:30 PM
If such a computation were actually being used by the brain, what would we expect? Well, first of all, it should act like a decision variable, which means that any stimulus close to the decision boundary will be hard to decide on, and will have long response times. 19/n
April 18, 2025 at 7:30 PM
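As a toy illustration of that prediction (the inverse-distance form below is just one simple choice, not necessarily the function fitted in the paper): the closer the decision variable is to the boundary, the longer the predicted response time.

```python
import numpy as np

def predicted_rt(decision_variable, boundary, rt_floor=0.4, gain=0.2, eps=0.05):
    """Toy model: response time grows as the decision variable approaches the boundary."""
    return rt_floor + gain / (np.abs(decision_variable - boundary) + eps)

values = np.linspace(0.0, 2.0, 9)
for v, rt in zip(values, predicted_rt(values, boundary=1.0)):
    print(f"decision variable {v:.2f} -> predicted RT {rt:.2f} s")
```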
The same idea would work for symmetry tasks: because the symmetric object has the same visual features repeated, it will "stand apart" compared to asymmetric or visually heterogeneous arrays. Thus we could "solve" a symmetry task by computing the distance to some center 17/n
April 18, 2025 at 7:30 PM
So, when you see an array containing identical items, the neural response is equal to the single-item response (well, almost, to an approximation). But when you see an array containing an oddball, its representation is somewhere between the neural responses to the two items. 15/n
April 18, 2025 at 7:30 PM
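In other words (a toy sketch with made-up numbers; the equal weights are placeholders, and the real combination rule and weights would be estimated from the neural data):

```python
import numpy as np

r_A = np.array([10.0, 2.0, 5.0])   # hypothetical response of a few neurons to item A
r_B = np.array([3.0, 8.0, 6.0])    # hypothetical response of the same neurons to item B

def array_response(item_responses, weights=None):
    """Array response modelled as a weighted mean of the single-item responses."""
    item_responses = np.asarray(item_responses, dtype=float)
    if weights is None:
        weights = np.full(len(item_responses), 1.0 / len(item_responses))
    return weights @ item_responses

print(array_response([r_A, r_A, r_A, r_A]))   # identical array: ~ the single-item response r_A
print(array_response([r_A, r_A, r_A, r_B]))   # oddball array: lies between r_A and r_B
```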
But there's a whole other category of visual tasks that do not fall into this framework. These tasks involve searching for a particular property, like determining if two images are the same or different, deciding if there's an odd one out, and even deciding symmetry! 7/n
April 18, 2025 at 7:30 PM
You see, most visual tasks are feature-based. If you are searching for the handsome Georgin in a crowd while ignoring other ugly distractors, you'd have to train a classifier and project any image onto it to get a decision variable. This is the standard model for decision making. 6/n
April 18, 2025 at 7:30 PM
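A minimal sketch of that standard model (scikit-learn, with made-up feature vectors standing in for images of the target and the distractors):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
target_features = rng.normal(1.0, 1.0, size=(100, 10))        # hypothetical "Georgin" images
distractor_features = rng.normal(-1.0, 1.0, size=(100, 10))   # hypothetical distractor images

X = np.vstack([target_features, distractor_features])
y = np.array([1] * 100 + [0] * 100)

clf = LogisticRegression().fit(X, y)   # train the classifier

# For any new image, its projection onto the learned boundary is the decision variable:
new_image = rng.normal(0.5, 1.0, size=(1, 10))
decision_variable = clf.decision_function(new_image)
print(decision_variable)   # compare to a threshold to decide target vs distractor
```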
Don't believe it? Below are some example arrays in which you should confirm if there is or isn't an oddball. You might find that it's easier to confirm that there's no target in the rope array (mean RT = 1.06 s) than in the leaf array (mean RT = 1.45 s). Why?!! 4/n
April 18, 2025 at 7:30 PM
Apply to our awesome PhD program! Deadline Apr 6
cns.iisc.ac.in/academics/ph...
March 21, 2025 at 2:10 AM
Because our study involved comparing dissimilarities, we realized we could do a similar analysis on pre-trained deep networks, and found that deep networks showed similar trends to humans, but only for simple shapes and not for complex shapes!
December 7, 2024 at 7:25 PM
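A sketch of the kind of analysis meant here (PyTorch/torchvision; the particular network, layer, and distance metric are illustrative choices, not necessarily the ones used in the study):

```python
import torch
from torchvision import models

# A pretrained network used as a feature extractor (ResNet-18 chosen only for illustration)
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.eval()
feature_extractor = torch.nn.Sequential(*list(net.children())[:-1])  # drop the final classifier

def features(images):
    """images: (N, 3, 224, 224) preprocessed tensors -> (N, 512) feature vectors."""
    with torch.no_grad():
        return feature_extractor(images).flatten(1)

def network_dissimilarity(images_a, images_b):
    """Euclidean distance between feature vectors of paired images."""
    return torch.norm(features(images_a) - features(images_b), dim=1)

# These network dissimilarities can then be correlated with human (visual search)
# dissimilarities, separately for simple and complex shapes.
```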
…as well as complex shapes
December 7, 2024 at 7:25 PM
We recruited human participants to perform lots of such searches, and this is exactly what we found: people searched among occluded displays just like they did among likely completions, rather than unlikely completions! This was true for simple shapes
December 7, 2024 at 7:25 PM
We thought we could check whether similarity relations between occluded displays behave more like the relations between likely displays; if so, it would show that occluded displays are seen like the likely displays.
December 7, 2024 at 7:25 PM
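Concretely, that check could look something like this (toy numbers; rank correlation is one reasonable way to compare the patterns, not necessarily the statistic used in the paper):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical dissimilarities, one value per pair of displays, measured in three conditions
occluded = np.array([0.9, 0.4, 0.7, 0.3, 0.6])
likely   = np.array([0.8, 0.5, 0.7, 0.2, 0.6])
unlikely = np.array([0.2, 0.9, 0.3, 0.8, 0.1])

# If occluded displays are represented like likely completions, their dissimilarity
# pattern should correlate better with the likely pattern than with the unlikely one.
rho_likely, _ = spearmanr(occluded, likely)
rho_unlikely, _ = spearmanr(occluded, unlikely)
print(f"occluded vs likely:   rho = {rho_likely:.2f}")
print(f"occluded vs unlikely: rho = {rho_unlikely:.2f}")
```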
But finding the mosaic/cut target is easy, which means they are dissimilar. This all seems to support what we think, that we really do see the occluded object as a complete circle behind the occluder. BUT here comes the catch: if you look closely, they also reported that:
December 7, 2024 at 7:25 PM
This is where we enter a rabbit hole. Turns out, there's a classic study by Rensink and Enns back in 1998 that showed searching for the occluded display among the "likely" display is easy
December 7, 2024 at 7:25 PM
Visual search is also a fantastic task because it's natural, intuitive, and a great way to study how similar two objects are in perception. For example, in the display below, it's really easy to find a T among Ns but not so easy to find the W
December 7, 2024 at 7:25 PM
It looks like a spiky object hidden behind a brick wall, right? But how do you know the hidden part is spiky? I mean, logically the object could just as well have been one of the two below:
December 7, 2024 at 7:25 PM
In a new study, out now in Attention, Perception, & Psychophysics, Thomas Cherian (@thomascherian.bsky.social) and I have some fascinating insights into what we see behind an occluder. Like all good things, the origin of this study was simple curiosity. Consider the picture below:
December 7, 2024 at 7:25 PM
Come attend the awesome Bangalore Cognition Workshop at IISc from June 15-21 2024! Apply at
forms.gle/kfA1obX8CkPg...
Deadline: Feb 29 2024 Please circulate widely!
February 15, 2024 at 9:07 AM
Ready or not....here I come to SFN 2023 to highlight some exciting work from our lab on 256-ch wireless recordings from IT & PMv in freely moving monkeys on Mon Nov 13 at 9:00AM on Poster # DD24 in the session "Visual responses during behavior II". Come check it out!
November 10, 2023 at 7:43 PM