adam osth
adamosth.bsky.social
Associate Professor at unimelb. Episodic memory, decision making, mathematical modeling. Friend to all cats. He/him.
Reposted by adam osth
I really need people outside of Seattle to know how much Krist Novoselic sucks
February 9, 2026 at 10:07 PM
There are many other interesting things in the paper, but I would prefer not to write a thread that is 50 posts long! Check it out if you're interested!
January 29, 2026 at 5:21 AM
The timing model had no good explanation of the fast low-confidence subjects: in that model, fast decisions are inextricably tied to high confidence.
January 29, 2026 at 5:21 AM
In this case, you get decisions that are fast but with similar levels of evidence in both accumulators.
January 29, 2026 at 5:21 AM
The MTR model has a counterintuitive explanation: when there is a strong positive correlation between the drift rates of the two responses, it can actually produce fast low-confidence responses.
January 29, 2026 at 5:21 AM
RTCON could explain these subjects via low thresholds on the low-confidence responses. These allow them to respond quickly despite having weak evidence.

Our LBA version of the model worked in the same way.
January 29, 2026 at 5:21 AM
Individual differences matter - Ratcliff and Starns found that some subjects produce fast low confidence responses.

We replicated this pattern! These participants are quite constraining for model development and testing.
January 29, 2026 at 5:21 AM
In short, each model passed the empirical hurdles, and differences between them were surprisingly subtle.

But there were some informative patterns here that are noteworthy.
January 29, 2026 at 5:21 AM
We subjected these models to detailed tests against individual participant data. The idea was to capture complete RT distributions associated with each confidence response.

We collected large datasets in order to achieve this goal.
January 29, 2026 at 5:21 AM
The third approach was a novel model we developed. In this model, time is measured by a separate accumulator.

The timing accumulator is also partitioned by thresholds. As each threshold is passed, confidence decreases. The state of the timer at the time of the decision determines confidence.
January 29, 2026 at 5:21 AM
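A rough sense of the timing idea above, as a minimal simulation sketch. This is not the paper's parameterization: the function name, drift rates, thresholds, and noise values are all made up for illustration, and the choice stage is a simple two-accumulator LBA race.

```python
import numpy as np

rng = np.random.default_rng(3)

def timing_trial(v_choice=(1.1, 0.5), v_timer=0.8, b=1.0, A=0.5, s=0.3,
                 timer_bounds=(0.4, 0.8, 1.2)):
    """Two LBA accumulators race for the choice while a separate,
    non-terminating timer accumulates. Every timer threshold passed by
    decision time lowers confidence one step (fast = confident)."""
    k = rng.uniform(0, A, size=2)                 # random start points
    v = np.clip(rng.normal(v_choice, s), 1e-6, None)
    t = (b - k) / v                               # time for each to hit b
    choice, rt = int(np.argmin(t)), float(t.min())
    timer = max(rng.normal(v_timer, s), 1e-6) * rt  # timer state at decision
    confidence = len(timer_bounds) - int(np.searchsorted(timer_bounds, timer))
    return choice, confidence, rt                 # confidence: 3 high .. 0 low

choice, confidence, rt = timing_trial()
```

Because the timer never terminates, confidence here is purely a readout of elapsed time, which is exactly why this model struggles with fast low-confidence responders.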
The second approach was done in some of our previous work with Angus Reynolds in the multiple threshold race (MTR) model.

Confidence is determined by the losing accumulator. Thresholds on the losing accumulator partition the evidence into confidence responses.
January 29, 2026 at 5:21 AM
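The balance-of-evidence mechanism above can be sketched in a few lines. Again, this is an illustrative toy, not the MTR model as fit in the paper: all parameter values and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(7)

def mtr_trial(v_target=1.2, v_lure=0.4, b=1.0, A=0.5, s=0.3,
              conf_bounds=(0.35, 0.7)):
    """Two LBA accumulators race; confidence comes from where the LOSER
    sits when the winner hits threshold b. A loser far below b means a
    big evidence gap, hence high confidence."""
    k = rng.uniform(0, A, size=2)                    # random start points
    v = np.clip(rng.normal([v_target, v_lure], s), 1e-6, None)
    t = (b - k) / v
    win = int(np.argmin(t))
    loser_state = k[1 - win] + v[1 - win] * t[win]   # loser's evidence at decision
    n_crossed = int(np.searchsorted(conf_bounds, loser_state))
    confidence = len(conf_bounds) - n_crossed        # 2 = high, 0 = low
    return win, confidence, t[win]

choice, confidence, rt = mtr_trial()
```

Note that when the two drift rates are strongly positively correlated across trials, both accumulators tend to be high together, so the loser finishes close to threshold even on fast trials, yielding the fast low-confidence pattern mentioned elsewhere in the thread.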
The first approach was done in the RTCON models. Each of the confidence accumulators races against the others.

We built an LBA version of this, which was pretty straightforward. It has an advantage in that it's very tractable - doesn't require simulation like the original RTCON models do.
January 29, 2026 at 5:21 AM
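One way to picture the LBA race above: one accumulator per confidence response, first past the threshold wins. This is a minimal sketch with invented parameter values, not the preprint's actual implementation; the tractability advantage comes from the LBA's closed-form likelihood, which this simulation doesn't show.

```python
import numpy as np

rng = np.random.default_rng(1)

def lba_confidence_race(drifts, b=1.0, A=0.5, s=0.3, t0=0.2):
    """One trial of an RTCON-like LBA: each confidence response gets its
    own accumulator, and the first to reach threshold b determines both
    the confidence rating and the RT."""
    k = rng.uniform(0, A, size=len(drifts))       # random start points
    v = rng.normal(drifts, s)                     # trial-to-trial drift noise
    with np.errstate(divide="ignore"):
        t = np.where(v > 0, (b - k) / v, np.inf)  # time for each to hit b
    winner = int(np.argmin(t))
    return winner, t0 + t[winner]                 # (rating index, RT in s)

# illustrative drifts for 4 confidence options, favouring the highest
rating, rt = lba_confidence_race(np.array([0.4, 0.6, 0.9, 1.3]))
```

This structure also makes the scaling problem visible: adding confidence options adds accumulators, and each extra accumulator is another source of noise in the race.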
These three models include:

- Confidence as a decision among multiple alternatives: different accumulators for each confidence option

- Balance of evidence

- Time as the basis of confidence
January 29, 2026 at 5:21 AM
New preprint w/ my student Haomin Chen and collaborator Andrew Heathcote!

Accumulator models are well known for being able to address choice and RT. Joint accounts of confidence are much less common.

In this work, we explored 3 LBA models of confidence.

osf.io/preprints/ps...
January 29, 2026 at 5:21 AM
There's many other things we explored in this paper. If I recapped everything, the thread would be quite long!
January 28, 2026 at 11:41 PM
Nonetheless - where our RTCON-like model struggled was when the number of confidence options was manipulated.

More confidence responses = more accumulators. This increases the noise in the decision, and accuracy declines with more accumulators even when thresholds are allowed to increase.
January 28, 2026 at 11:41 PM
Fast low-confidence responses were first discovered by Ratcliff and Starns and motivated their RTCON model. Low thresholds on the low-confidence responses can capture this pattern.

The MTR can also capture this, because high correlations between the accumulators produce the same pattern.
January 28, 2026 at 11:41 PM
A surprise was that all of the models were able to clear the majority of the empirical hurdles.

Where the timing model consistently failed was in accounting for subjects who show fast low-confidence responses.
January 28, 2026 at 11:41 PM
We subjected each of these models to detailed tests. We fit individual participants, where the goal is to capture RT distributions associated with each confidence rating. We collected large datasets to do just this.
January 28, 2026 at 11:41 PM
Finally - another possibility is that confidence is directly inferred from the speed of the decision.

We built a model where confidence is read off of a separate timing accumulator. This accumulator doesn't terminate - instead thresholds on the timer can be used to determine confidence.
January 28, 2026 at 11:41 PM
We also compare it to a balance of evidence model, where confidence is determined by the state of the accumulator that loses the race.

We have explored this in previous work (the MTR). Thresholds on the losing accumulator determine the confidence level.
January 28, 2026 at 11:41 PM
Probably the most detailed accumulator model of confidence is the family of RTCON models.

We designed an LBA implementation where each confidence response receives its own accumulator. A single distribution scales the drift rates to each accumulator.

The best part? It's tractable!
January 28, 2026 at 11:41 PM
I have been really hurting over the news in Minnesota. I'm having a hard time finding the words for it even now.

In short, I'm finding it very hard to be optimistic, and I'm very scared things could get much worse
January 26, 2026 at 10:57 PM
Reposted by adam osth
I just created a series of seven deep-dive videos about AI, which I've posted to youtube and now here. 😊

Targeted to laypeople, they explore how LLMs work, what they can do, and what impacts they have on learning, well-being, disinformation, the workplace, the economy, and the environment.
Part 1: How do LLMs work?
YouTube video by Andrew Perfors
January 22, 2026 at 12:45 AM
I think one of the biggest gaps in the theoretical understanding of memory is encoding. We have a lot of detailed memory models that clarify how retrieval works, but they make minimal assumptions about encoding.
January 23, 2026 at 12:47 AM