Derek Arnold
@visnerd.bsky.social
Vision Scientist, Aphant
It's really not. As described here, N is a subjective self-report. You may as well ask how many fairies people can see dancing on the head of a pin. Conceptually, this is simply not a verifiable performance metric.
October 21, 2025 at 9:04 PM
I so wish : )

Failing that - I'll raise a glass to your continued good health at that time : )
August 20, 2025 at 5:08 AM
If the DV is RTs, it would be important to control for local image contrasts. If the DV is recognition, controlling for ~all image properties is futile, as these are what we recognize. If you want to know which properties we rely on, well, that is a different question (it's some of them).
August 20, 2025 at 5:07 AM
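A minimal sketch, in Python, of what controlling for local image contrasts could look like in practice: local RMS contrast, the windowed standard deviation of luminance divided by its windowed mean. The function name, toy image, and window size are illustrative assumptions, not anything from this thread.

import numpy as np
from scipy.ndimage import uniform_filter

def local_rms_contrast(image, window=16):
    # Local RMS contrast: windowed std of luminance over the windowed mean.
    image = image.astype(float)
    mean = uniform_filter(image, size=window)
    mean_sq = uniform_filter(image ** 2, size=window)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return std / np.maximum(mean, 1e-8)

# Toy luminance image in [0, 1]; real stimuli would be loaded here instead.
rng = np.random.default_rng(1)
img = rng.uniform(0.2, 0.8, size=(128, 128))
print(local_rms_contrast(img).mean())  # a summary one might equate across conditions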
The problem, if there is one, is that you didn't control for oriented contrast energy, spatial frequency content, local or long-range curvature, etc. etc. Rotating an image causes big changes in these properties. Deciding whether control is futile or sensible depends on context.
August 20, 2025 at 5:03 AM
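A minimal sketch of the oriented contrast energy point: rotating an image rotates its Fourier amplitude spectrum, so energy is redistributed across orientation bands defined relative to the image axes. The toy grating and binning below are illustrative assumptions, not anything from this thread.

import numpy as np

def orientation_energy(image, n_bins=8):
    # Bin 2D FFT amplitude by spatial-frequency orientation (0-180 degrees).
    amp = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    fy, fx = np.indices(image.shape) - np.array(image.shape)[:, None, None] / 2
    theta = np.mod(np.degrees(np.arctan2(fy, fx)), 180.0)
    bins = np.floor(theta / (180.0 / n_bins)).astype(int) % n_bins
    return np.bincount(bins.ravel(), weights=amp.ravel(), minlength=n_bins)

# Toy image: noise plus a horizontal grating, so energy concentrates in one band.
rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128))
img += 2.0 * np.sin(2 * np.pi * np.arange(128) / 8)[:, None]

print(orientation_energy(img))            # energy peaks in one orientation band
print(orientation_energy(np.rot90(img)))  # same total energy, shifted ~90 degrees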
Exactly - attention kicks in and re-weights image properties etc. But as you say, the images are cool, and I want a coaster : )
August 20, 2025 at 4:57 AM
If you are worried that detection or RTs might be related to contrast diffs etc. - sure, control for that type of thing. But I think claiming to control for ~all image stats is futile if you still want to be able to recognize things in the image.
August 20, 2025 at 4:35 AM
Obviously it depends on context, but if you controlled for all image stats, you could not recognize anything, as recognition depends on image stats we have learnt to associate with meaning. So I never understand when papers claim to have controlled for image stats - they haven't if people can still recognize things.
August 20, 2025 at 4:32 AM
Bluesky is not a great platform for nuance : )

I also find it really hard to follow conversations here, and think people should use tildes more often

If there is any disagreement, it is with the idea that controlling for low-level confounds is a sensible goal.
August 20, 2025 at 4:12 AM
The info that is mapped to semantics is correlated image structure, which is changed when the images are reoriented. So it is a super cool demo of anagram images (I want a coaster), but it does not show that 'high-level' effects are driven by identical stimuli. You have to change the stimuli.
August 20, 2025 at 2:47 AM
You just taught me a word : )

Will be looking for opportunities to refer to 'elides' : )
August 20, 2025 at 2:13 AM
Replications (Stroop-like interference, aesthetics shaped by knowledge) tapped understanding entrained by recognition of correlated image structure. This doesn't seem much different to 'house' and 'horse' having different meanings. The task that didn't replicate (a visual search) had a detection component.
August 20, 2025 at 2:08 AM
Yes, but even with multi-stable images, attention acts to re-weight how we process the different features of the image. At least that is all happening within the brain/mind, and it does not rely on people detecting reliable image features.
August 20, 2025 at 1:04 AM
I think attempts to dissociate the cognitive processes entrained by an image from all low-level image properties are misguided, as it is ultimately some minimal set of correlated low-level image properties that allows us to recognize anything (including faces as faces, and rabbits as rabbits).
August 19, 2025 at 10:44 PM
Only images that are multi-stable without any change (in orientation or anything else) could be said to dissociate meaning from image structure, but even there attention kicks in, and our brains re-weight the encoded image structure via selective processing.
August 19, 2025 at 10:40 PM
The words 'house' and 'horse' are similar, but due to subtle image-structure differences it is no surprise they trigger very different associations. Similarly, your images are only subtly different when rotated, but they are different, and it is not surprising they can trigger different associations.
August 19, 2025 at 10:34 PM
I think you are attempting the impossible. To extract meaning from an image, we detect correlations between image structure and high-level meaning. It is telling that all your tasks that have positive results are high-level, and the only null result comes from a search task that involves detection.
August 19, 2025 at 10:30 PM
By changing the orientation of the image, you have changed the low-level image statistics. Once again, you have shown that you cannot manipulate high-level visual properties while holding all low-level content constant.
August 19, 2025 at 10:00 PM
Obviously I have no idea in this space.
August 15, 2025 at 12:40 PM
When I dream, I am fully immersed and embodied in the scene. At least until I guess I am dreaming, which I can confirm as I don’t have any sense of touch.

People’s descriptions of imagery don’t sound like that? They sound like they are more selective, no?
August 15, 2025 at 12:39 PM
In my mind, if you have to close your eyes, you are not a projector. Whatever representation your brain can generate clearly cannot be projected into the world you are seeing.
August 14, 2025 at 10:02 AM
The idea of a projected experience that you can only have if you close your eyes makes no sense to me. The very idea to me is that you can experience an imagined thing as existing in the external world that you are currently seeing. Some people very clearly describe being able to do that.
August 14, 2025 at 10:01 AM
This line of thought inspired this project. If you think that mental rotation (MR) tasks are a reliable metric of people's propensity to visualise, you might draw the conclusion you outline. Instead, our data suggest MR tasks are not a reliable metric of that propensity.
July 22, 2025 at 10:41 PM