Guillaume Bellec
@bellecguill.bsky.social
AI, Neuroscience and Music
To show this funny "primate brain bias" in Gemini, I copy-pasted your message as a correction prompt (first image). The gyri disappeared, which is good. Then I tried to insist on putting the cerebellum behind, not below.

It still likes primate brain too much apparently.
November 21, 2025 at 2:37 PM
Exactly, Gemini loves to draw primate brains. It was ridiculous in G2.5, and cortices were all over the place.

I thought it was corrected in 3. Apparently not. Good thing there is still a long way to go before it fools scientists with solid anatomical knowledge.

Imo, the area locations were consistent with this image, weren't they?
November 21, 2025 at 2:25 PM
Tried to correct the big flaws detected by @ctestard.bsky.social and @ackurth.bsky.social .

Well, clearly Gemini 3 is still strongly drawn to human or primate brains. The teeth look better, though.
November 21, 2025 at 12:39 PM
Thanks 🙏! Indeed, I was too convinced by the olfactory bulb, but many things are missing. Good to see that the AI models have a long way to go.

My knowledge of mouse anatomy is quite limited, as you can see. The location of the cortices looked good to me. Any opinion on that?
November 21, 2025 at 12:18 PM
Good catch!

Should probably look more like this ideally:
fr.wikipedia.org/wiki/Fichier...
November 21, 2025 at 12:11 PM
Now the edit capability. It also nailed the locations of M2 and S2 (as far as I can fact-check myself).

Doing figures will never be the same.
November 21, 2025 at 11:45 AM
I am impressed by the improvement with Gemini 3 and nano 🍌.

I used nano banana to make scientific figures. With 2.5 it was repeatedly putting a human brain inside the 🐭 head.

Now it draws accurate mouse brain anatomy and even seems to locate the cortical areas correctly. Big jump imo
November 21, 2025 at 11:32 AM
This summer I am on a scientific train tour 🧪🧠 🚂 now in Florence for the CNS conference.

Austrian night trains are amazing. I tested the new mini-sleeper cabin (economy class).

5 ⭐ sleeping experience, but a bit small to eat the (included!) breakfast 🍞 ☕
July 7, 2025 at 3:29 PM
So why are "perturbation [data] more special"?

Here is a toy math model: a two-area system can be feedforward or recurrent (hypothetical mechanism 1 or 2) and still produce the same activity distribution.

With an opto inactivation you separate the two hypotheses right away. Is that convincing?
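Here is one way to make that toy model concrete (a sketch under my own assumptions: linear stochastic dynamics with unit noise; all parameters are made up). The two networks are constructed to share the exact same stationary activity covariance, yet silencing area 2 changes area 1's variance only in the recurrent one.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

Q = np.eye(2)  # isotropic noise covariance

# Hypothesis 1: feedforward, area 1 drives area 2, no feedback
A_ff = np.array([[-1.0,  0.0],
                 [ 1.0, -1.0]])
# Stationary covariance solves A S + S A^T = -Q
Sigma = solve_continuous_lyapunov(A_ff, -Q)

# Hypothesis 2: a recurrent model with the SAME covariance.
# Any A = (K - Q/2) Sigma^{-1} with K antisymmetric solves the same
# Lyapunov equation, so both models produce identical activity stats.
K = np.array([[0.0, 0.5], [-0.5, 0.0]])
A_rec = (K - Q / 2) @ np.linalg.inv(Sigma)
assert np.allclose(solve_continuous_lyapunov(A_rec, -Q), Sigma)

# Opto inactivation: clamp area 2 to zero. Area 1 then obeys
# dx1 = A[0,0] x1 dt + dW, with stationary variance 1/(2|A[0,0]|).
var_ff  = 1 / (2 * abs(A_ff[0, 0]))   # feedforward: unchanged
var_rec = 1 / (2 * abs(A_rec[0, 0]))  # recurrent: changes

print(Sigma[0, 0], var_ff, var_rec)
```

From activity alone the two hypotheses are indistinguishable (same Σ), but the inactivation separates them immediately: the feedforward model keeps area 1's variance at Σ₁₁, the recurrent one does not.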
February 14, 2025 at 10:52 AM
Again, call me crazy! We argue that a perturbation-robust RNN enables measurement of brain gradients.

This is because, mathematically, the effect of μ-perturbations is one Taylor expansion away from the RNN gradients. So -- if the RNN is robust -- the gradients of the RNN approximate the gradients in the recorded circuit. Cool!
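The one-step Taylor argument, spelled out (my own notation, not the paper's: r for circuit activity, μ for the perturbed quantity):

```latex
\Delta r \;=\; r(\mu + \delta\mu) - r(\mu)
\;\approx\; \frac{\partial r}{\partial \mu}\, \delta\mu
```

So responses to small μ-perturbations are, to first order, gradient measurements; and if the fitted RNN is perturbation-robust, its gradient ∂r_RNN/∂μ approximates ∂r_circuit/∂μ.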

6/8
January 8, 2025 at 4:33 PM
To speculate on why perturbation-robust RNNs will become important:

We simulate a read-write opto experimental setup where a robust RNN is used to target optimal μ-perturbations and change simulated mouse behavior in real time.

(We also think it's a bit crazy... but it works in simulation)
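A toy sketch of what "targeting optimal μ-perturbations" could mean (my own simplification, not the paper's method: linearized dynamics, a linear behavior readout, and made-up weights). The model's gradient tells you which stimulation direction moves behavior the most per unit of stimulation power.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                   # neurons in the modeled circuit
W = rng.normal(scale=0.2, size=(n, n))   # fitted recurrent weights (toy)
w_out = rng.normal(size=n)               # behavior readout (toy)

# One linearized step: x_next = W x + u, behavior = w_out . x_next.
# The gradient of behavior w.r.t. the injected current u is w_out;
# injecting one step earlier picks up an extra factor of W^T.
grad_now     = w_out
grad_earlier = W.T @ w_out

# Optimal perturbation under a power budget ||u|| <= eps:
# align u with the gradient (Cauchy-Schwarz).
eps = 1.0
u_opt = eps * grad_now / np.linalg.norm(grad_now)

# Behavioral effect of the optimal vs a random perturbation
x = rng.normal(size=n)
effect_opt = w_out @ (W @ x + u_opt) - w_out @ (W @ x)
u_rand = rng.normal(size=n)
u_rand *= eps / np.linalg.norm(u_rand)
effect_rand = w_out @ u_rand

print(effect_opt, effect_rand)
```

In a closed-loop version, the same gradient would be recomputed from the current state at every step, which is where perturbation-robustness of the fitted model becomes essential.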

5/8
January 8, 2025 at 4:33 PM
We tested this in-vivo with multi-area recordings in mice covering 6 areas from sensory to motor cortices. Our RNNs also predict jaw movements recorded with a camera.

The results are consistent with the artificial data. Dale's law, local inhibition (and spikes) make the model more robust.

4/8
January 8, 2025 at 4:33 PM
We make RNN variants with added bio-features (e.g. Dale's law).

Empirically, the features that improve robustness the most are:

- Dale's law: E/I weights are +/-
- Local inhibition: I do not project to other areas

Other features improve less:
- Replacing σ with spikes
- Sparsity prior
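A minimal sketch of how these two constraints can be parameterized in an RNN weight matrix (my own illustration, not the paper's code; the ~25% inhibitory fraction and sizes are made up): fix the sign of each outgoing weight for Dale's law, and zero out cross-area projections from inhibitory units for local inhibition.

```python
import numpy as np

rng = np.random.default_rng(1)
n_per_area, n_areas = 4, 2
n = n_per_area * n_areas
area = np.repeat(np.arange(n_areas), n_per_area)  # area label per unit
is_inh = rng.random(n) < 0.25                     # ~25% inhibitory (toy)

# Dale's law: free parameters are magnitudes; the sign of every
# outgoing weight is fixed by the presynaptic cell's type.
# Convention here: W[post, pre], so columns are presynaptic.
W_raw = rng.normal(size=(n, n))
sign = np.where(is_inh, -1.0, 1.0)
W = np.abs(W_raw) * sign[None, :]

# Local inhibition: inhibitory units may not project across areas.
cross_area = area[:, None] != area[None, :]
W[cross_area & is_inh[None, :]] = 0.0

# Checks: excitatory columns >= 0, inhibitory columns <= 0,
# inhibitory cross-area weights exactly zero.
assert (W[:, ~is_inh] >= 0).all() and (W[:, is_inh] <= 0).all()
assert (W[cross_area & is_inh[None, :]] == 0).all()
```

During training, only the magnitudes would be optimized while the sign pattern and the cross-area mask stay fixed.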

3/8
January 8, 2025 at 4:33 PM
We train RNNs to fit spike train recordings (in-vivo in mice or artificial data). RNN units are mapped 1-to-1 with brain cells, so we can simulate opto-activation of a cell type in one area.

Vanilla σRNNs predict very well before perturbation, but their response after perturbation is very wrong.
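With a 1-to-1 unit-to-cell mapping, simulating the opto manipulation reduces to indexing: add a stimulation current to exactly the units whose mapped cell carries the targeted (area, cell-type) label. A hedged sketch (the area/cell-type labels and the amplitude are illustrative, not from the paper):

```python
import numpy as np

# Each RNN unit inherits its mapped cell's metadata (toy labels)
units = np.array(
    [("wS1", "PV"), ("wS1", "Exc"), ("wM1", "PV"), ("wM1", "Exc")],
    dtype=[("area", "U8"), ("cell_type", "U8")],
)

def opto_current(units, area, cell_type, amplitude=5.0):
    """External current vector activating one cell type in one area."""
    target = (units["area"] == area) & (units["cell_type"] == cell_type)
    return amplitude * target.astype(float)

# Stimulate PV cells in wS1: only unit 0 receives current; this vector
# is then added to the RNN input at the chosen time steps.
I_ext = opto_current(units, area="wS1", cell_type="PV")
print(I_ext)
```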

2/8
January 8, 2025 at 4:33 PM
Pre-print 🧠🧪
Is mechanism modeling dead in the AI era?

ML models trained to predict neural activity fail to generalize to unseen opto perturbations. But mechanism modeling can solve that.

We say "perturbation testing" is the right way to evaluate mechanisms in data-constrained models

1/8
January 8, 2025 at 4:33 PM
In the end the model exhibits the correct distribution of neural activity and behavior.

This publication was only possible because of the hard work of Christos Sourmpis. Congrats!

Thank you also to Carl Petersen and Wulfram Gerstner for the guidance and support.
November 26, 2023 at 9:07 AM
We use GPUs, PyTorch and backprop in spiking RNNs to generate activity statistics consistent with the recordings.

The network is constrained by data from 28 recording sessions, spanning the relevant sensory and motor cortices.

The model even has to produce coherent jaw movement.
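Backprop through spikes needs a trick, since the hard threshold has zero derivative almost everywhere. A standard fix in this line of work is a surrogate "pseudo-derivative" on the backward pass; here is a by-hand numpy sketch with made-up numbers (a single neuron, chain rule written out explicitly):

```python
import numpy as np

theta = 1.0  # spike threshold

def spike(v):
    """Forward pass: hard threshold (non-differentiable)."""
    return (v > theta).astype(float) if hasattr(v, "astype") else float(v > theta)

def pseudo_derivative(v, gamma=0.3):
    """Backward pass: triangular surrogate replacing the Dirac delta."""
    return gamma * max(0.0, 1.0 - abs(v - theta) / theta)

# One gradient step through a single spiking unit:
# loss = 0.5 * (z - z_target)^2 with z = spike(w * x)
x, w, z_target = 2.0, 0.4, 1.0
v = w * x                                  # 0.8 < theta, so no spike
z = spike(v)                               # z = 0
dL_dz = z - z_target                       # -1
dL_dw = dL_dz * pseudo_derivative(v) * x   # surrogate makes this nonzero
w_new = w - 1.0 * dL_dw                    # step pushes v toward threshold
print(z, dL_dw, w_new)
```

With the true derivative the gradient would be zero everywhere and learning would stall; the surrogate lets the error signal move the weight until the unit actually spikes.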
November 26, 2023 at 8:58 AM
Our paper will be presented at NeurIPS next month.

Trial Matching: how to fit a large spiking neural network to thousands of recorded neurons.
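As I read it, the core idea can be sketched like this (my own toy version, with a hypothetical distance and fake data, not the paper's code): match each simulated trial to a recorded trial by optimal 1-to-1 assignment, then penalize the distance between matched trials.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)
T = 50                                                # time bins per trial
rec = rng.poisson(2.0, size=(8, T)).astype(float)     # recorded trials (toy)
sim = rec[rng.permutation(8)] + rng.normal(0, 0.1, (8, T))  # shuffled sims

# Pairwise distances between per-trial activity traces
# (any trial-wise summary statistic would do here)
D = cdist(sim, rec)

# Optimal 1-to-1 assignment of simulated to recorded trials
row, col = linear_sum_assignment(D)
trial_matching_loss = D[row, col].mean()
print(trial_matching_loss)
```

In training one would recompute the assignment at each step and backpropagate through the matched distances, so the network is rewarded for reproducing the recorded trial-to-trial variability, not just the average.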
November 26, 2023 at 8:51 AM