Jenelle Feather
@jfeather.bsky.social
Flatiron Research Fellow #FlatironCCN. PhD from #mitbrainandcog. Incoming Asst Prof #CarnegieMellon in Fall 2025. I study how humans and computers hear and see.
In a second example, we apply our method to a set of deep neural network models and reveal differences in the local geometry that arise due to architecture and training types, illustrating the method's potential for revealing interpretable differences between computational models.
April 24, 2025 at 5:13 AM
As an example, we use this framework to compare a set of simple models of the early visual system, identifying a novel set of image distortions that allow immediate comparison of the models by visual inspection.
April 24, 2025 at 5:13 AM
We then extend this work to show that the metric may be used to optimally differentiate a set of *many* models, by finding a pair of “principal distortions” that maximize the variance of the models under this metric.
April 24, 2025 at 5:13 AM
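Roughly, the idea can be sketched like this (a simplified illustration, not the paper's exact algorithm): given each model's FIM at a shared base image, search for a unit-norm distortion whose sensitivity e^T I_m e varies most across models. The FIMs and models below are toy placeholders.

```python
import torch

def principal_distortion(fims, n_steps=500, lr=0.1):
    """Gradient-ascent sketch: find a unit-norm distortion whose log-sensitivity
    e^T I_m e varies most across a list of per-model FIMs. Illustrative only;
    the paper finds a pair of such distortions."""
    e = torch.randn(fims[0].shape[0], requires_grad=True)
    opt = torch.optim.Adam([e], lr=lr)
    for _ in range(n_steps):
        u = e / e.norm()                                    # keep the distortion unit-norm
        log_sens = torch.stack([torch.log(u @ I @ u) for I in fims])
        loss = -log_sens.var()                              # maximize variance across models
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (e / e.norm()).detach()

# Toy usage: two 2-D "models" with opposite sensitivities.
fims = [torch.tensor([[4.0, 0.0], [0.0, 1.0]]),
        torch.tensor([[1.0, 0.0], [0.0, 4.0]])]
e_max = principal_distortion(fims)   # a direction on which the models disagree most
```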
We use the FIM to define a metric on the local geometry of an image representation near a base image. This metric can be related to previous work investigating the sensitivities of one or two models.
April 24, 2025 at 5:13 AM
We propose a framework for comparing a set of image representations in terms of their local geometries. We quantify the local geometry of a representation using the Fisher information matrix (FIM), a standard statistical tool for characterizing the sensitivity to local stimulus distortions.
April 24, 2025 at 5:13 AM
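For intuition, here is a minimal sketch (not the paper's code) of the FIM for a deterministic representation with additive Gaussian response noise, where it reduces to J^T J with J the Jacobian of the representation at the base image; the model and image below are toy placeholders. The quadratic form e^T I(x) e is the local sensitivity used to compare models.

```python
import torch

# Toy stand-ins for a model representation and a base image (illustrative only).
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 5), torch.nn.ReLU(), torch.nn.Flatten())
base_image = torch.rand(1, 3, 16, 16)

def responses(x_flat):
    # Map a flattened image to a flattened response vector so the Jacobian is a matrix.
    return model(x_flat.view(1, 3, 16, 16)).squeeze(0)

x = base_image.flatten()
J = torch.autograd.functional.jacobian(responses, x)  # (n_responses, n_pixels)
fim = J.T @ J                                          # FIM under additive Gaussian response noise

# Local sensitivity of the representation to a small distortion direction e:
e = torch.randn_like(x)
e = e / e.norm()
sensitivity = e @ fim @ e                              # quadratic form e^T I(x) e
```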
Recent work suggests that many models are converging to representations that are similar to each other and (maybe) to human perception. However, these similarity comparisons often focus on stimuli that are far apart in stimulus space. Even if the global geometry is similar, the local geometry can be quite different.
April 24, 2025 at 5:13 AM
At #NeurIPS2023? Interested in brains, neural networks, and geometry? Come by our **Spotlight Poster** Tuesday @ 5:15PM (#1914) on A Spectral Theory of Neural Prediction and Alignment.
paper: openreview.net/forum?id=5B1...
w/ Abdul Canatar, Albert Wakhloo & SueYeon Chung @sueyeonchung.bsky.social
December 11, 2023 at 7:58 PM
In my favorite result of the paper, we found that human recognizability was well correlated with other-model recognizability. Thus, the discrepant metamers are due to the models having *idiosyncratic invariances* that are not shared with other models or human observers!
🧵20/N
October 16, 2023 at 10:39 PM
So what is happening with these model representations to cause them to be misaligned with humans? To get at this, we tested how well a model’s metamers were recognized by other models.
🧵19/N
October 16, 2023 at 10:38 PM
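In other words, something like this (an illustrative sketch; the function and argument names are placeholders):

```python
import torch

def cross_model_recognition(other_model, metamers, reference_labels):
    """Fraction of one model's metamers that a *different* model assigns to the
    reference stimulus class. Names and shapes are placeholders."""
    with torch.no_grad():
        preds = other_model(metamers).argmax(dim=1)
    return (preds == reference_labels).float().mean().item()
```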
Metamer recognizability also dissociated from other forms of robustness, such as robustness to class-preserving image corruptions.
🧵18/N
October 16, 2023 at 10:37 PM
We also examined other sources of adversarial robustness: architectural changes to reduce aliasing (the “Lowpass” model) and a V1-inspired front-end (“VOne” model). Although these yielded similar robustness (f), the lowpass architecture had more recognizable metamers (g).
🧵17/N
October 16, 2023 at 10:37 PM
Is this just another test to assess adversarial vulnerability? NO! Even though adversarial training improved the human recognizability of model metamers, within adversarially trained models, metamer recognizability was not predicted by adversarial robustness.
🧵15/N
October 16, 2023 at 10:36 PM
We trained audio models with adversarial training and found the same result! These models also had more human-recognizable model metamers compared to their standard-trained counterparts.
🧵14/N
October 16, 2023 at 10:36 PM
Metamers from adversarially trained models appeared more natural, and were more recognizable to humans. But note that at late model stages the metamers remained less than fully recognizable; adversarial training does not fully eliminate the discrepancy with humans.
🧵13/N
October 16, 2023 at 10:36 PM
Can we fix this human-model discrepancy? We found that humans were better able to recognize metamers from models trained with *adversarial training*. Adversarial examples were generated online during training, and the models were trained to assign them the correct label.
🧵12/N
October 16, 2023 at 10:36 PM
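For reference, the standard recipe looks roughly like this (a generic PGD sketch, not the exact models or hyperparameters used in the paper):

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels,
                              eps=0.01, step_size=0.0025, n_pgd=5):
    """One training step on online PGD adversarial examples (generic recipe;
    placeholders, not the paper's exact setup)."""
    adv = images.clone().detach()
    for _ in range(n_pgd):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + step_size * grad.sign()      # ascend the classification loss
        adv = images + (adv - images).clamp(-eps, eps)    # project back to the L-inf ball
        adv = adv.clamp(0.0, 1.0)
    optimizer.zero_grad()
    F.cross_entropy(model(adv), labels).backward()        # train on the correct label
    optimizer.step()
```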
Another discrepancy between current models and humans is the tendency of models to base their judgments on texture rather than shape. However, we found that models trained to reduce this texture bias had metamers that were comparably unrecognizable to humans.
🧵11/N
October 16, 2023 at 10:35 PM
Could these misaligned invariances be due to the supervised task? To get at this, we tested visual self-supervised models. Although some models had slightly more recognizable metamers at intermediate stages, human recognition was still low in absolute terms.
🧵10/N
October 16, 2023 at 10:35 PM
We quantified these observations with human behavioral experiments. By the final stages of the tested models, humans were nearly at chance on the task, even though the model's activations for these metamers matched those for the natural stimulus (and the model recognized them as the natural stimulus class).
🧵9/N
October 16, 2023 at 10:34 PM
We tested various supervised neural network models, including convolutional architectures, transformers, and models trained on large datasets. In all cases, model metamers generated from the final stages appeared unnatural and were generally unrecognizable to humans.
🧵7/N
October 16, 2023 at 10:34 PM
Successive stages of a model may build up invariance, producing successively larger sets of model metamers. Do these metamers remain recognizable to humans for commonly used computational models, as they would in a “correct” model?
🧵6/N
October 16, 2023 at 10:34 PM
Humans performed a classification task on metamers generated from different stages of a model. We investigated both *audio* and *visual* models. If model invariances are shared by humans, humans should be able to classify model metamers as the reference stimulus class.
🧵5/N
October 16, 2023 at 10:33 PM
Invariances can be described in terms of sets in the stimulus space. For a given reference stimulus, there is a set of stimuli that evoke the same classification judgment as the reference. A subset of these stimuli (the metamers) produce the same activations as the reference.
🧵4/N
October 16, 2023 at 10:33 PM
We generated stimuli whose activations within an artificial neural network match those of a natural stimulus. Inspired by previous work in human color perception and visual crowding, we call these stimuli “Model Metamers.”
🧵3/N
October 16, 2023 at 10:32 PM
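Conceptually, metamer generation is just optimization in stimulus space. A minimal sketch (the actual procedure involves more careful initialization, optimization schedules, and convergence checks; names here are placeholders):

```python
import torch

def generate_metamer(model_stage, reference, n_steps=2000, lr=0.01):
    """Optimize a noise image so its activations at `model_stage` match those
    of `reference`. Minimal sketch, not the paper's exact procedure."""
    target = model_stage(reference).detach()
    metamer = torch.rand_like(reference, requires_grad=True)  # initialize from noise
    opt = torch.optim.Adam([metamer], lr=lr)
    for _ in range(n_steps):
        loss = ((model_stage(metamer) - target) ** 2).mean()  # match activations
        opt.zero_grad()
        loss.backward()
        opt.step()
    return metamer.detach()
```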