Joao Barbosa
@jbarbosa.org
INSERM group leader @ Neuromodulation Institute and NeuroSpin (Paris) in computational neuroscience.

How and why are computations enabling cognition distributed across the brain?

Expect neuroscience and ML content.

jbarbosa.org
I think I will miss a meeting at 17.30 Nov 6...🙈
October 9, 2025 at 11:44 AM
Finally, Jeanne Sentenac will show how low-rank RNNs provide a really neat explanation for information selection within 'orthogonal subspaces' - purely theoretical work, but heavily inspired by mattpanichello.bsky.social's beautiful paper

This will be P II 58
September 26, 2025 at 4:57 PM
Philipp Werthmann will use low-rank RNNs fit to large-scale recordings to show that even when 'everything is everywhere', different regions have different functions. A lot of pushback on the everything-is-everywhere mantra, as you can see 🤓 @pessoabrain.bsky.social @benhayden.bsky.social

P III 13
September 26, 2025 at 4:57 PM
Lubna Abdul Parveen will present her PhD project: analysis of errors in the Mante task and decoding analyses from 6 regions of the monkey brain. She will show clear evidence that everything is not always everywhere.

This will be P IV 22.
September 26, 2025 at 4:57 PM
In the second talk (Monday), I will show how low-rank RNNs fit to data are a useful tool for asking hard questions, such as studying communication subspaces from large-scale recordings or the effect of neuromodulators on population dynamics.

This workshop is organized by @rdgao.bsky.social and Manuel Brenner.
September 26, 2025 at 4:57 PM
In the first workshop talk (Monday), I'll make the case that everything might be everywhere when you decode, but that's not the end of the story: different regions have very different dynamics and encoding geometries.

This workshop is organized by @aitormg.bsky.social and @ackurth.bsky.social
September 26, 2025 at 4:57 PM
Very cool, congratulations!

Ps: that behavioral map looks like the map of Paris 🤓
September 26, 2025 at 6:23 AM
Just showing off how my office looks when we DON'T work until late 😎
September 18, 2025 at 4:48 PM
Just showing off how my office looks when I work until late🤓
September 15, 2025 at 9:45 PM
We asked (and partially answered?) this question during COVID in this paper

jbarbosa.org/files/Barbos...

Back then @yaelniv.bsky.social's repo was the best

nimh-dsst.github.io/OpenCogData/
September 15, 2025 at 8:46 PM
It's sad to leave Berlin behind but the move to Frankfurt came with no more online Bernstein? Why? 😭

@bernsteinneuro.bsky.social
September 14, 2025 at 8:29 AM
I recently learned: w/ lesioned 8A, you can do many WM tasks but not one👇

Guess what happens when you decode from 8A during each of these tasks? They are all the same.

Decoding is like a quality check: it provides almost no info about function

scholar.google.com/citations?vi...
September 6, 2025 at 3:03 AM
Sad to miss #CCN2025. It will be the 1st conference where a PhD student working w/ me will speak 😭

Go see Lubna's talk (Friday) about distributed neural correlates of flexible decision making in 🐒,

work done in collaboration w/ @scottbrincat.bsky.social @siegellab.bsky.social & @earlkmiller.bsky.social
August 10, 2025 at 3:56 PM
He's right, this is PhD-level (lack of) math skills
August 8, 2025 at 8:57 AM
Wrong 1st screenshot 🥲
August 8, 2025 at 8:13 AM
"it's completely b-less!" 😂
August 8, 2025 at 8:10 AM
Hey look, ChatGPT 5 is still an idiot.

It would be great to have a control condition for tech-hype BS:

Say OpenAI just copy-pastes GPT N and says it's N+1. How many would say it's amazing?
August 8, 2025 at 12:21 AM
Only a metal band, of the best kind, chooses a text like this for the press release of their new album ✊🖤
August 1, 2025 at 1:16 PM
A senior non-human primate researcher, by ChatGPT
July 24, 2025 at 10:27 PM
She, in particular, knows well what needs to be done - or not done.

www.dw.com/en/amid-call...
July 23, 2025 at 8:14 PM
The whole blog post is a bit weird, but this in particular makes no sense 😂 it simply shows that you can get attractors in an RNN
July 8, 2025 at 3:44 PM
How is this proof that attractors are not necessary? It just shows that trained RNNs develop attractors
July 8, 2025 at 3:39 PM
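To make the point in the two replies above concrete, here is a minimal sketch of my own (not from the blog post in question; the rank-one connectivity, `k`, and `N` are illustrative choices): even a hand-built RNN, with no training at all, develops two point attractors.

```python
# Toy sketch (assumptions: rank-one connectivity, k and N chosen for illustration):
# the RNN dx/dt = -x + W tanh(x) with W = (k/N) m m^T has two stable fixed points
# along m when k > 1 -- attractors appear without any training.
import numpy as np

rng = np.random.default_rng(0)
N, k, dt, steps = 50, 2.0, 0.1, 500
m = np.ones(N)
W = (k / N) * np.outer(m, m)  # rank-one connectivity

def run(x):
    # Euler integration of dx/dt = -x + W tanh(x)
    for _ in range(steps):
        x = x + dt * (-x + W @ np.tanh(x))
    return x

# Two initial conditions, biased to either side, fall into the two symmetric attractors.
a_pos = run(0.5 * rng.standard_normal(N) + 0.2).mean()
a_neg = run(0.5 * rng.standard_normal(N) - 0.2).mean()
# Along m the dynamics reduce to da/dt = -a + k*tanh(a); for k = 2 the stable
# fixed points sit at a ≈ ±1.915, so the network state converges there.
print(a_pos, a_neg)
```

So finding attractors in a trained RNN tells you the architecture can express them, not that the computation requires them.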
Yes! It's this one. But does this mean it didn't work?
July 7, 2025 at 9:32 PM