Andrey Chetverikov
@achetverikov.bsky.social
Associate Professor in Cognitive Psychology at the University of Bergen, Norway. I study decision-making and biases in perception and visual working memory, with occasional forays into higher-level decisions. https://andreychetverikov.org
Pinned
@shansmann-roth.bsky.social and I finally finished our paper confirming a unique prediction of the Demixing Model (DM): inter-item biases in #visualworkingmemory depend on the _relative_ noise of targets and non-targets, potentially going in opposing directions. 🧵1/9
www.biorxiv.org/content/10.6...
Noise in Competing Representations Determines the Direction of Memory Biases
Our memories are reconstructions, prone to errors. Historically treated as a mere nuisance, memory errors have recently gained attention when found to be systematically shifted away from or towards no...
www.biorxiv.org
Reposted by Andrey Chetverikov
Compositional data (proportions that sum to 1) behave in ways standard models aren’t built for

I walk through why Dirichlet regression is often the right tool & what extra insight it gives, using a real eye-tracking example

#Dirichlet #r #brms #guide #eyetracking

open.substack.com/pub/mzlotean...
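
A minimal sketch of what such a model might look like in brms (the data frame, column names, and predictor below are hypothetical placeholders, not taken from the linked guide):

```r
# Sketch of a Dirichlet regression in brms, assuming a data frame `fix`
# with per-trial fixation proportions on three areas of interest
# (rows sum to 1) and a `condition` factor. All names are hypothetical.
library(brms)

# brms expects the compositional outcome bound into a matrix column
fix$y <- with(fix, cbind(aoi_face, aoi_body, aoi_background))

fit <- brm(
  y ~ condition,        # proportions modeled jointly on the simplex
  data   = fix,
  family = dirichlet()  # Dirichlet likelihood, multivariate logit link
)
summary(fit)  # coefficients are log-odds relative to the reference
              # category (the first response column by default)
```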
February 9, 2026 at 4:05 PM
Reposted by Andrey Chetverikov
Ok researchers, rise and shine, it's Groundhog Day - what better way to get up to date with what has been going on at the FORRT Replication Hub? forrt.org/replication-...
February 2, 2026 at 9:45 AM
yep, I think it started with Fischer & Whitney 2014, who used 'relative orientation of *previous* target' as their x-axis
February 2, 2026 at 4:01 PM
and actually, it's the opposite =)
February 2, 2026 at 2:12 PM
Yeah, it's the difference between current and previous target orientation. But which way does it go? Current - previous, or previous - current? I had to go looking for it in the paper text.
February 1, 2026 at 2:23 PM
Dear colleagues, please use decipherable axis titles or at least write in the caption what your Greek letters mean precisely =)
January 31, 2026 at 4:42 PM
... a debate in the reference they provide. So to me it looks like ignoring a century-plus of work in the field to use a fancy tool. I am not saying it's a bad paper - on the contrary, it's great, I like it - but I wish it were better situated. 3/3
January 27, 2026 at 8:25 AM
... that you refer to just address different things (like categorization of core affect into different emotions; they refer to Barrett's work presenting this perspective). Like they say "The experience of visually evoked affect ... remains a matter of hotly contested debate" but there isn't ... 2/3
January 27, 2026 at 8:25 AM
I don't think I agree that it's true - there are different theories for different levels of affective phenomena. If we talk about the things they measure (arousal/valence/beauty), the role of perceptual processes (and, more widely, prediction) is well acknowledged. The majority of theories ... 1/3
January 27, 2026 at 8:25 AM
Looks great, but I have to say, this isn't really true: "Far less frequently do these theories focus on the role of seeing itself (perception) [in creating affect]." I mean, empirical aesthetics has been around since Fechner's time, and there is even a Visual Science of Art Conference.
January 26, 2026 at 9:00 PM
Hi Annika! Thanks for the detailed reply! Yep, definitely, huge heterogeneity makes synthesis difficult. It would be nice if all studies shared their data; then one could use whatever scores they like in a meta-analysis.
January 26, 2026 at 3:29 PM
I'm curious, though, whether different scoring is a) intentional and b) linked to different interpretations. Do people make a new score because they want to interpret the IGT differently?
How many versions of the Iowa Gambling Task (IGT) exist? And how much does this affect research using the IGT? More than you might think. 🧵
Methodological Flexibility in the Iowa Gambling Task Undermines Interpretability: A Meta-method Review: https://osf.io/4g3vr
January 26, 2026 at 9:15 AM
Is it like GitHub? It feels like a collection of repos.
January 23, 2026 at 11:59 AM
what is g-node?
January 23, 2026 at 11:50 AM
I'm thinking about where to put the data and code for our recent preprint. Usually I use OSF, but it would be nice to be able to upload/update directly from VSCode. So is GitHub -> OSF the best way?
January 23, 2026 at 11:42 AM
A weird column in Nature. First, I find it odd that "two years of academic work" are equated with a ChatGPT chat history. Also, hello, backups are useful?
www.nature.com/articles/d41...
When two years of academic work vanished with a single click
After turning off ChatGPT’s ‘data consent’ option, Marcel Bucher lost the work behind grant applications, teaching materials and publication drafts. Here’s what happened next.
www.nature.com
January 23, 2026 at 10:58 AM
Vision people, we must investigate!
With most psychedelic drugs, you never know what you're going to get. But this mysterious mushroom from China - without fail - causes users to hallucinate tiny people: crawling up walls, popping out from under furniture and marching under doors. www.bbc.com/future/artic...
'They saw them on their dishes when eating': The mushroom making people hallucinate dozens of tiny humans
Only recently described by science, the mysterious mushrooms are found in different parts of the world, but they give people the same exact visions.
www.bbc.com
January 22, 2026 at 7:38 PM
Reposted by Andrey Chetverikov
Here’s a thought that might make you tilt your head in curiosity: with every movement of your eyes, head, or body, the visual input to your eyes shifts! Nevertheless, it doesn't feel like the world suddenly tilts sideways whenever you tilt your head. How can this be? TWEEPRINT ALERT! 🚨🧵 1/n
ALT: a husky puppy lying on the floor with its tongue out, wearing a blue collar (GIF via media.tenor.com)
January 21, 2026 at 12:28 PM
Very cool work and congrats to you, Maria, and everyone else involved! It would be interesting to see if this is expected when the decoder has a bunch of mixed, noisy signals from two sets of reference frames, or if it requires a genuine in-between signal.
January 22, 2026 at 8:31 AM
Reposted by Andrey Chetverikov
We're running a 5th edition of the always-exciting UCL Summer School on Consciousness and Metacognition this year, 8th-10th July 2026 in London. Accommodation and travel expenses are covered.

For more information and how to apply, check out metacoglab.org/summer-schoo...
Summer School - About — the MetaLab
metacoglab.org
January 20, 2026 at 5:07 PM
Reposted by Andrey Chetverikov
This is great - it's about time someone updated the discourse on LLM energy usage to reflect that coding agents send massively more prompts than occasional questions to ChatGPT

Simon estimates that a day of coding agent usage comes out close to the energy needed to run a dishwasher
Whenever I read discourse on AI energy/water use that focuses on the "median query," I can't help but feel misled. Coding agents like Claude Code send hundreds of longer-than-median queries every session, and I run dozens of sessions a day.

On my blog: www.simonpcouch.com/blog/2026-01...
January 20, 2026 at 11:10 PM
You're welcome!
January 19, 2026 at 7:44 PM
Reposted by Andrey Chetverikov
main goal for this year: find a new job! 🙂

looking for a role with fun & complex technical challenges & within a great community. my main expertise is in signal processing/EEG/MEG, but topic-wise I am quite flexible.

science/industry both great! starting mid-year. nschawor.github.io/cv
January 16, 2026 at 10:14 AM
thanks, makes sense!
January 15, 2026 at 9:29 PM
or the slope & intercept are perfectly correlated, right?
January 15, 2026 at 3:01 PM