Kobe Desender
@kobedesender.bsky.social
Assistant professor at @KU_Leuven, working on #confidence, #decisionmaking and #cognitivecontrol => DesenderLab.com
In sum, our results reveal that the sense of effort is sensitive to the amount of time spent accumulating evidence, which is under the control of the decision boundary (set in response to expectations about difficulty), and to variations in single-trial P3 (reflecting task difficulty).
November 4, 2025 at 12:37 PM
Finally, analysis of the neural data showed that subjective effort ratings tracked neural signals associated with task difficulty (P3) but not neural signals associated with task preparation (CNV).
November 4, 2025 at 12:37 PM
hDDM fits revealed that when participants expected a hard trial, they selectively increased the decision boundary. Crucially, a regression model showed that effort ratings were sensitive to this variation in the decision boundary (higher boundary -> longer sampling -> higher effort)
November 4, 2025 at 12:37 PM
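Not the paper's code, just a toy sketch of the logic in that post: in a bare-bones drift diffusion simulation (all parameter values made up), raising the decision boundary mechanically lengthens the time spent accumulating evidence - the quantity the effort ratings appear to track.

```python
import numpy as np

def simulate_ddm(drift, boundary, dt=0.001, noise_sd=1.0, max_t=5.0, rng=None):
    """Toy DDM trial: accumulate noisy evidence until |x| crosses the boundary.
    Returns the decision time in seconds (illustrative parameters, not the paper's fits)."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.normal()
        t += dt
    return t

rng = np.random.default_rng(1)
low_bound = np.mean([simulate_ddm(drift=0.8, boundary=1.0, rng=rng) for _ in range(500)])
high_bound = np.mean([simulate_ddm(drift=0.8, boundary=1.5, rng=rng) for _ in range(500)])
print(f"mean decision time - low boundary: {low_bound:.2f}s, high boundary: {high_bound:.2f}s")
```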
Critically, we inserted medium-difficulty trials to assess the pure influence of expectation/preparation. On those medium trials, participants were a bit slower and experienced more effort when they expected a difficult trial. So, what is happening here?
November 4, 2025 at 12:37 PM
When we say something "feels effortful", what sort of computations underlie those feelings? Theoretically, subjective effort = preparation (CNV) + task difficulty (P3). To test this, participants decided whether to solve an easy/hard equation, and then actually solved an easy/medium/hard equation
November 4, 2025 at 12:37 PM
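A minimal sketch of the kind of single-trial test described above, assuming simulated data and plain OLS rather than whatever (likely mixed-model) pipeline the paper uses: regress effort ratings on CNV and P3 amplitudes and ask which one carries weight.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fake single-trial data for one participant (illustration only; variable names are mine).
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "cnv": rng.normal(size=n),   # preparation-related amplitude
    "p3": rng.normal(size=n),    # difficulty-related amplitude
})
# Simulate effort ratings that track P3 but not CNV, mimicking the reported pattern.
df["effort"] = 0.6 * df["p3"] + rng.normal(scale=1.0, size=n)

# Single-trial regression: does effort track CNV, P3, or both?
fit = smf.ols("effort ~ cnv + p3", data=df).fit()
print(fit.params)
print(fit.pvalues)
```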
Too quick? Read the full paper :)
October 29, 2025 at 4:22 PM
In sum, contrary to the often-studied positive evidence bias (here referred to as response-congruent evidence effect), we found that variation in response-incongruent evidence contributes more to confidence - and we explain why! If you actually want to understand all of this, read the paper :)
October 29, 2025 at 4:22 PM
Finally, we confirm a novel prediction from this model: when the same "purple" stimulus is presented in the context of blue vs red stimuli, the contribution of elements to confidence (i.e. the RIE effect) should flip. This is exactly what we found (panel B)
October 29, 2025 at 4:22 PM
Robust averaging assumes that elements closer to the decision boundary have higher SNR (and thus contribute more). Indeed, dropping the robust averaging principle from the model (cf. Model 3) predicts equal regression slopes (a.k.a. a complete misfit)!
October 29, 2025 at 4:22 PM
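One simple way to cash out the robust-averaging idea (my own toy version, not necessarily the paper's exact model): pass each element through a compressive nonlinearity before averaging, so that elements far from the category boundary saturate and near-boundary elements retain relatively more influence.

```python
import numpy as np

def linear_average(elements):
    return float(np.mean(elements))

def robust_average(elements, k=1.5):
    """Compressive (tanh) transform before averaging: extreme elements saturate,
    so elements nearer the category boundary (values around 0) carry relatively
    more weight. k is a made-up gain parameter."""
    return float(np.mean(np.tanh(k * np.asarray(elements))))

# Two sets of 8 elements: the second has the larger linear mean, but its evidence
# comes from extreme, high-variance elements that a robust averager discounts.
mild = [0.20, 0.30, 0.10, 0.25, 0.20, 0.30, 0.15, 0.20]
extreme = [1.50, -1.00, 1.80, -0.90, 1.60, -1.10, 1.70, -0.90]
print("linear:", linear_average(mild), linear_average(extreme))
print("robust:", robust_average(mild), robust_average(extreme))
```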
Across 9 datasets, we observe a clear RIE: variation in response-incongruent evidence contributes MORE to confidence than variation in response-congruent evidence. This pattern is captured by a model (shades) implementing robust averaging. (Crucially, the model is fit on means, not on coefficients.)
October 29, 2025 at 4:22 PM
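For readers unfamiliar with the congruent/incongruent split, here is the kind of regression the RCE vs RIE comparison implies - a sketch on simulated data with made-up weights, not the paper's analysis: sum the evidence for and against the chosen response on each trial and compare their regression slopes on confidence.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_trials, n_elements = 2000, 8
elements = rng.normal(size=(n_trials, n_elements))   # signed evidence per element
choice = np.sign(elements.mean(axis=1))              # decide on the side of the mean

# Evidence supporting vs contradicting the chosen response, per trial.
signed = elements * choice[:, None]
congruent = np.where(signed > 0, signed, 0).sum(axis=1)
incongruent = np.where(signed < 0, -signed, 0).sum(axis=1)

# Fake confidence that weights incongruent evidence more strongly (an RIE-like pattern).
confidence = 0.3 * congruent - 0.6 * incongruent + rng.normal(scale=0.5, size=n_trials)

X = sm.add_constant(np.column_stack([congruent, incongruent]))
print(sm.OLS(confidence, X).fit().params)  # compare the magnitudes of the two slopes
```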
When judging the average color of 8 elements, how do you weigh the individual elements when computing confidence? Classically, researchers found a response-congruent evidence bias (RCE; a.k.a. the "positive evidence bias"). However, robust averaging predicts the opposite effect (RIE)
October 29, 2025 at 4:22 PM
Finally, and most importantly, Robin wrote an excellent and accessible demo which should allow anyone (you!) to get started with hMFC: github.com/robinvloeber...
GitHub - robinvloeberghs/hMFC: Repository for the Hierarchical Model for Fluctuations in Criterion (hMFC)
September 25, 2025 at 9:13 AM
MANY figures in the paper show that hMFC works, but highlighting this one: with as few as 500 trials per participant, hMFC allows excellent recovery of the single-trial criterion - look at panel C for a representative example participant. I'm (obviously biased) impressed by this!
September 25, 2025 at 9:13 AM
We developed hMFC, a Bayesian hierarchical framework that allows estimating single-trial criterion states by fitting data from different participants while taking into account the nesting of data within participants.
September 25, 2025 at 9:13 AM
Ignoring fluctuations in criterion is problematic: simulations show that criterion fluctuations induce apparent history biases (panel C), lead to underestimated psychometric slopes (panel D) and underestimated measures of sensitivity, such as d' (panel D)
September 25, 2025 at 9:13 AM
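The d' point is easy to reproduce in a toy simulation (my numbers, not the paper's): generate SDT data from an observer whose criterion drifts, then compute d' the standard way, i.e. as if the criterion were fixed - the estimate comes out below the true sensitivity.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, true_dprime = 20000, 1.5
stim = rng.integers(0, 2, n)                      # 0 = noise, 1 = signal
dv = rng.normal(loc=stim * true_dprime, scale=1.0)

def estimated_dprime(criterion):
    resp = dv > criterion
    hit = resp[stim == 1].mean()
    fa = resp[stim == 0].mean()
    return norm.ppf(hit) - norm.ppf(fa)

# Same observer, but once with a fixed criterion and once with a slowly drifting one.
fixed = np.full(n, 0.75)
drifting = np.zeros(n)
for t in range(1, n):
    drifting[t] = 0.99 * drifting[t - 1] + rng.normal(scale=0.12)
drifting += 0.75

print("estimated d', fixed criterion:   ", round(estimated_dprime(fixed), 2))
print("estimated d', drifting criterion:", round(estimated_dprime(drifting), 2))
```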
Classic models of decision-making, like signal detection theory, assume that choices are made by comparing a decision variable (DV) to a criterion. Often this criterion is (implicitly) assumed to be constant; here we implement a fluctuating criterion following an autoregressive model.
September 25, 2025 at 9:13 AM
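To make that generative assumption concrete, here is a minimal sketch of an SDT observer whose criterion follows an AR(1) process (my own notation and toy parameter values; see the hMFC repo above for the real implementation).

```python
import numpy as np

def ar1_criterion(n_trials, phi=0.98, sigma=0.10, seed=0):
    """Single-trial criterion following an AR(1) process: c[t] = phi * c[t-1] + noise.
    phi and sigma are illustrative values, not hMFC defaults."""
    rng = np.random.default_rng(seed)
    c = np.zeros(n_trials)
    for t in range(1, n_trials):
        c[t] = phi * c[t - 1] + rng.normal(scale=sigma)
    return c

# An SDT observer using this fluctuating criterion: respond "signal" when DV > c[t].
rng = np.random.default_rng(1)
c = ar1_criterion(500)
stim = rng.integers(0, 2, 500)
dv = rng.normal(loc=stim * 1.0, scale=1.0)
choice = (dv > c).astype(int)
```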
Full details, alternative valence-only models, post-experiment questionnaires targeting awareness, etc. are all in the paper!
September 25, 2025 at 8:44 AM
At the group level, our learning model won over a non-learning alternative, but more participants were actually best fitted by the latter. Closer inspection revealed why: there was a dynamic group (showing a clear confidence learning effect) and a static group (showing, well, nothing)
September 25, 2025 at 8:44 AM
At the group level, participants adapted their reporting of confidence to subtle changes in feedback (with no effects on accuracy or RTs). Panel E nicely shows how people adapt their confidence to feedback over time, and panel D shows that our learning model closely captures this finding!
September 25, 2025 at 8:44 AM
To experimentally test this, we provided participants with model-generated feedback, reflecting the probability that their choice was correct. Unbeknownst to them, we alternated between blocks with subtly higher/lower feedback
September 25, 2025 at 8:44 AM
We know (more or less) how humans compute confidence, but how do we learn to compute confidence? We propose that agents compute prediction errors (confidence − feedback) to update the weights underlying the computation of confidence
September 25, 2025 at 8:44 AM
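A bare-bones delta-rule sketch of that proposal (my own toy implementation, not the paper's model): confidence is a weighted readout of some trial features, the prediction error is the gap between confidence and feedback, and the weights are updated to shrink that gap.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
w = np.array([0.2, 0.2])    # made-up initial weights on two confidence cues
lr = 0.1                    # made-up learning rate

for trial in range(2000):
    x = rng.normal(size=2)                            # e.g. evidence strength, RT (placeholders)
    confidence = sigmoid(w @ x)                       # confidence as a weighted readout of the cues
    feedback = sigmoid(np.array([0.8, -0.4]) @ x)     # toy "true" p(correct)-like feedback signal
    pe = confidence - feedback                        # prediction error (confidence − feedback)
    w -= lr * pe * confidence * (1 - confidence) * x  # gradient step that shrinks the gap

print("learned weights:", np.round(w, 2), "- toy generating weights were [0.8, -0.4]")
```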