Nicola Sambuco
@nicolasambuco.bsky.social
Assistant Professor (fixed-term RTDA), University of Bari. Former CSEA trainee at UF
Curating reward processing research → RewardSignals feed (#RewardSignals).
Quoting @gangchen6.bsky.social:
"Next up: bringing this to everyday analysis.

AFNI’s new program SIMBA is in development and aims to make full whole-brain voxel-level hierarchical modeling accessible to users, hopefully within the next few months."

Maybe he'll make us a great Xmas present! 😂
November 21, 2025 at 8:35 AM
Reposted by Nicola Sambuco
If your ultimate inference target is the group level, then what matters is the joint contribution of trial number per condition and participant sample size, not either one in isolation. We explored this point in detail here:
www.sciencedirect.com/science/arti...
Hyperbolic trade-off: The importance of balancing trial and subject sample sizes in neuroimaging
Here we investigate the crucial role of trials in task-based neuroimaging from the perspectives of statistical efficiency and condition-level generali…
www.sciencedirect.com
November 21, 2025 at 3:37 AM
@pessoabrain.bsky.social - that’s something you’re very familiar with as well. Thoughts on the minimum number of trials per condition needed to run Bayesian multilevel modeling?
November 20, 2025 at 10:45 PM
As I said to Vinny in the past, I think this paper is simply incredible.
There’s one thing I’m curious about in these analyses: based on your experience, how many trials per condition would be a decent amount to get good estimates?
November 20, 2025 at 10:42 PM
What could we do to stop this plague? I keep telling students that they are doing the work for their own good, not for grades… but many just don’t seem to care. Higher performance with minimum effort, that’s all that seems to matter.
November 20, 2025 at 10:36 PM
I might be wrong on this, but sometimes I get more of a “wow factor” reading papers from the 90s than reading the latest flashy paper in Nature-something.
It might just be me… but I have the feeling that we are just accumulating papers, not knowledge.
November 20, 2025 at 10:20 PM
But how about… slowing down? The publication rate is too high, because the system we built is meant to foster careers, not good science. And all this is doing is making big publishing groups rich at the expense of researchers’ work.
November 20, 2025 at 6:36 PM
but that's the plot twist: they are pushing us to fight to keep the DEI 😂
November 20, 2025 at 5:31 PM
@davidbaranger.bsky.social - I’m genuinely curious about your view here. I know your work on reward processing and reliability and see it as a reference point in the field, so I’d be really interested in any nuances you’ve picked up after working on these questions.
November 20, 2025 at 2:38 PM
3/3
Knocking down ERα in midbrain DA neurons blunts sensitivity to reward context without changing thirst. A nice link between the estrous cycle, dopamine RPEs, and reinforcement learning, with obvious implications for sex differences & menstrual-cycle effects in psychiatry.
November 19, 2025 at 10:29 PM
2/3
Using GRAB-DA photometry + modeling: NAcc dopamine encodes reward prediction errors, and high estradiol especially boosts large positive RPEs. Proteomics points to a mechanism: reduced DAT/SERT expression → slower reuptake → bigger phasic DA signals.
November 19, 2025 at 10:29 PM
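The RPE framing in the post above can be sketched as a minimal Rescorla-Wagner-style update with a gain on positive prediction errors, mimicking the reported estradiol effect. This is an illustrative toy, not the paper's actual model; the function and parameter names are my own assumptions.

```python
def rpe_update(value, reward, alpha=0.1, estradiol_gain=1.0):
    """One Rescorla-Wagner-style value update.

    A positive RPE is multiplied by estradiol_gain to mimic the
    reported amplification of large positive dopamine signals
    (hypothetical parameterization, not the paper's fitted model).
    Returns the updated value estimate and the (scaled) RPE.
    """
    rpe = reward - value
    if rpe > 0:
        rpe *= estradiol_gain  # gain applies only to positive RPEs
    return value + alpha * rpe, rpe

# Toy run: value climbs faster toward reward when the gain is high.
v = 0.0
for r in [1, 1, 0, 1]:
    v, delta = rpe_update(v, r, alpha=0.2, estradiol_gain=1.5)
```

With `estradiol_gain > 1`, learning from rewards accelerates while learning from omissions is unchanged, which is one simple way to capture an asymmetric boost of positive RPEs.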
Reposted by Nicola Sambuco
And the next step? Full voxel-level modeling.

Recent numerical advances cracked the scalability barrier. Voxel-level hierarchical modeling is now feasible, revealing just how punishing traditional multiple-comparison adjustments really are.
arxiv.org/abs/2511.12825
SIMBA: Scalable Image Modeling using a Bayesian Approach, A Consistent Framework for Including Spatial Dependencies in fMRI Studies
Bayesian spatial modeling provides a flexible framework for whole-brain fMRI analysis by explicitly incorporating spatial dependencies, overcoming the limitations of traditional massive univariate app...
arxiv.org
November 18, 2025 at 10:13 PM
3/
fMRI: despite inhibiting vmPFC, cTBS amplified reward-prediction-error BOLD in vmPFC, mediodorsal thalamus, and dorsal striatum.
Authors interpret this as a shift from fast, vmPFC-driven Pavlovian invigoration toward slower, more uncertain thalamo-striatal learning. #dopamine #fMRI
November 17, 2025 at 9:21 PM
2/
Design & behavior: single-blind vmPFC cTBS vs sham before a motivational Go/NoGo task in the scanner.
cTBS →
- fewer Go responses
- slower RTs
- RL modelling: selective drop in positive learning rate (gains learned more slowly), trend toward reduced Pavlovian bias.
November 17, 2025 at 9:21 PM
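The RL model summarized in the post above, with separate learning rates for gains vs. losses and a Pavlovian bias that invigorates Go responses, can be sketched along the lines of standard motivational Go/NoGo models. All names and parameter values here are my own illustrative assumptions, not the authors' fits.

```python
import math

def gonogo_policy(q_go, q_nogo, v_state, pav_bias=0.2, beta=3.0):
    """Softmax choice between Go and NoGo.

    The Pavlovian bias adds state value to the Go action weight,
    so appetitive cues invigorate Go responding (illustrative
    parameterization in the style of common Go/NoGo RL models).
    Returns P(Go).
    """
    w_go = q_go + pav_bias * v_state
    return 1.0 / (1.0 + math.exp(-beta * (w_go - q_nogo)))

def q_update(q, reward, alpha_pos=0.3, alpha_neg=0.3):
    """Value update with separate learning rates for positive
    vs. negative RPEs; a selective drop in alpha_pos (as reported
    after cTBS) slows learning from gains only."""
    rpe = reward - q
    alpha = alpha_pos if rpe > 0 else alpha_neg
    return q + alpha * rpe
```

Lowering `alpha_pos` while holding `alpha_neg` fixed reproduces the qualitative cTBS pattern: gains are learned more slowly, losses unchanged, and shrinking `pav_bias` weakens the Pavlovian pull toward Go.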
That’s an incredibly accurate analogy
November 17, 2025 at 9:15 PM