Hannes Mehrer
@hannesmehrer.bsky.social
Computational neuroscientist, NeuroAI lab @EPFL
Model perceptual changes via simulated microstimulation
We visualized perceptual changes from simulated stimulation of model face-selective regions. This yields face-related changes: additional faces appear (#1), the face becomes larger (#1161), or specific face features are enhanced (#533).
October 7, 2025 at 3:22 PM
Experiment 2
With a slightly different site-selection criterion, stimulation shifted behavior above baseline in monkey 1 (Cohen’s d = 0.67), though the model could no longer accurately predict monkey behavior.
October 7, 2025 at 3:22 PM
Experiment 1
Model-predicted behavioral shifts correlated with stimulation-evoked behavioral shifts in both monkeys. While predicted model responses were strong, monkey behavior was not shifted above baseline.
October 7, 2025 at 3:22 PM
Visual stimuli via GANs
We generate image sequences that smoothly modulate neural activity along a stimulation site’s tuning dimension. This links visual input to the direction of activation changes resulting from microstimulation (Papale et al. 2024: www.biorxiv.org/content/10.1...)
October 7, 2025 at 3:22 PM
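The thread does not include code, but the basic idea can be sketched roughly as follows (an illustration, not the authors’ actual pipeline): given a pretrained GAN generator and an estimated tuning direction in its latent space, step a latent code along that direction to obtain an image sequence that should smoothly raise or lower the target site’s activity. `G`, `z0`, and `tuning_direction` are hypothetical names.

```python
import numpy as np

def make_stimulus_sequence(G, z0, tuning_direction, n_steps=9, max_shift=3.0):
    """Generate an image sequence by sliding a GAN latent code along a
    site's tuning direction: negative shifts should lower the site's
    activity, positive shifts should raise it.

    G: pretrained generator, latent vector -> image (hypothetical callable).
    z0: starting latent code.
    tuning_direction: vector in latent space along which the site's
        predicted activity increases (estimated from recordings).
    """
    d = tuning_direction / np.linalg.norm(tuning_direction)  # unit vector
    shifts = np.linspace(-max_shift, max_shift, n_steps)
    return [G(z0 + s * d) for s in shifts]
```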
How it works
1. Map the in-silico cortical sheet of a topographic model to the monkey cortex.
2. Optimize stimulation parameters by prototyping experiments in the model.
3. Only test those parameters in vivo that are predicted to yield the largest behavioral effects (a rough sketch of this selection loop follows below).
October 7, 2025 at 3:22 PM
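A rough sketch of steps 2 and 3 in the list above, assuming a hypothetical `simulate_stimulation(site, amplitude)` that perturbs the topographic model at a mapped cortical site and returns the predicted behavioral shift; this illustrates the selection loop only, not the authors’ code.

```python
def select_stimulation_parameters(simulate_stimulation, candidate_sites,
                                  amplitudes, top_k=5):
    """Prototype stimulation experiments in silico, keep the best parameters.

    simulate_stimulation(site, amplitude) is a hypothetical stand-in for
    perturbing the model at a mapped cortical site and reading out the
    predicted shift in choice behavior.
    """
    predicted = [
        (simulate_stimulation(site, amp), site, amp)
        for site in candidate_sites
        for amp in amplitudes
    ]
    # Only the parameter sets with the largest predicted effects go in vivo.
    predicted.sort(key=lambda t: t[0], reverse=True)
    return predicted[:top_k]
```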
Stimulate high-level vs early visual cortex
Visual prosthetics in early visual areas can evoke simple percepts (e.g., letters), but they are limited by (1) electrode count and (2) low-level features. We target high-level cortex to elicit percepts of more complex objects.
October 7, 2025 at 3:22 PM
Also from Amsterdam (Christina Sartzetaki, Gemma Roig, Cees G.M. Snoek, Iris Groen): using RSA, 100 models are evaluated on the BOLD Moments dataset (doi.org/10.1038/s414..., data available on OpenNeuro: openneuro.org/datasets/ds0...).
April 28, 2025 at 3:17 AM
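For context, a minimal version of an RSA comparison (a generic sketch, not this team’s pipeline): compute a representational dissimilarity matrix (RDM) for the model features and for the brain responses to the same stimuli, then correlate the two.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(model_features, brain_responses):
    """Compare model and brain representations of the same stimuli.

    model_features:  (n_stimuli, n_model_units)
    brain_responses: (n_stimuli, n_voxels)
    Returns the Spearman correlation between the two condensed RDMs
    (upper triangles of the dissimilarity matrices).
    """
    model_rdm = pdist(model_features, metric="correlation")   # 1 - Pearson r
    brain_rdm = pdist(brain_responses, metric="correlation")
    rho, _ = spearmanr(model_rdm, brain_rdm)
    return rho
```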
Similar to the @bashivan.bsky.social 2019 paper on neural population control (doi.org/10.1126/scie...), a team from Amsterdam (Diego Cerdas, Christina Sartzetaki, Magnus Petersen, Pascal Mettes, Iris Groen)
April 28, 2025 at 3:17 AM
Smoothly varying maps can also emerge in vision models (note: here not language models) through credit-based self-organizing maps.
April 28, 2025 at 3:17 AM
Another form of a topographic (language) model: TopoNets.
Here, the spatial loss is based on the difference between a layer's unit activations and a blurred (low-pass filtered) version of them.
April 28, 2025 at 3:17 AM
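A minimal sketch of a spatial loss of that kind, following the description in the post rather than the exact TopoNets implementation: arrange a layer's units on a 2D grid, blur the activation map, and penalize the difference.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_smoothness_loss(activations_2d, sigma=2.0):
    """Penalize the difference between a layer's activation map (units
    arranged on a 2D grid) and a low-pass filtered (blurred) version of it.
    Smooth, topographic maps give a small loss; salt-and-pepper activation
    patterns give a large one.
    """
    blurred = gaussian_filter(activations_2d, sigma=sigma)
    return float(np.mean((activations_2d - blurred) ** 2))

# Example: a random (non-topographic) 32x32 activation map has a high loss.
acts = np.random.randn(32, 32)
print(spatial_smoothness_loss(acts))
```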
Before ICLR 2025 comes to an end today, a few #NeuroAI impressions from Singapore.
First, very happy to present our work on TopoLM as an oral, here with @neilrathi.bsky.social
Initial thread: bsky.app/profile/hann...
Paper: doi.org/10.48550/arX...
Code: github.com/epflneuroailab
April 28, 2025 at 3:17 AM
Together with @neil_rathi, I will present our #ICLR2025 Oral paper on TopoLM, a topographic language model!
Oral: Friday, 25 Apr, 4:18 p.m. (session 4C)
Poster: Friday, 25 Apr, 10 a.m., Hall 3 + Hall 2B
Paper: arxiv.org/abs/2410.11516
Code and weights: github.com/epflneuroailab
April 23, 2025 at 5:30 AM