Josh Wilson
norcalneuro.bsky.social
Bad neuroscientist, worse psychologist, pretend engineer. Current PhD student at Stanford. Interested in how humans and machines encode and read out visual representations.
Reposted by Josh Wilson
1/X Our new method, the Inter-Animal Transform Class (IATC), is a principled way to compare neural network models to the brain. It's the first to ensure both accurate brain activity predictions and specific identification of neural mechanisms.

Preprint: arxiv.org/abs/2510.02523
October 6, 2025 at 3:22 PM
Reposted by Josh Wilson
Excited to share that our paper is now out in Neuron @cp-neuron.bsky.social (dlvr.it/TM9zJ8).

Our perception isn't a perfect mirror of the world. It's often biased by our expectations and beliefs. How do these biases unfold over time, and what shapes their trajectory? A summary thread. (1/13)
Attractor dynamics of working memory explain a concurrent evolution of stimulus-specific and decision-consistent biases in visual estimation
People exhibit biases when perceiving features of the world, shaped by both external stimuli and prior decisions. By tracking behavioral, neural, and mechanistic markers of stimulus- and decision-rela...
July 29, 2025 at 4:02 PM
Yeah. I see a lot of "if chatGPT coded all of it, how do you know it's right?" - but for me the more important thing is "if chatGPT coded all of it, how did you learn anything???"
Usually the process of writing code makes you realize you don't know the solution in enough detail. If AI fills in the blanks by making assumptions you're not sufficiently aware of, that could be dangerous. So it seems like having the experience of writing the code yourself would help.
January 13, 2025 at 11:44 PM
starting to understand the "NIH grants are a crapshoot" refrain
December 12, 2024 at 8:13 PM
Reposted by Josh Wilson
I really like this review paper by Justin Gardner: www.nature.com/articles/s41...
I keep coming back to it whenever I’m writing about models of perception.

I especially like this quote; it took me a while to fully wrap my head around it, but I think it touches on something quite fundamental.

🧠📈
December 11, 2024 at 12:28 PM
Reposted by Josh Wilson
"I published my paper on biorxiv and now I'm hoping to get it advertised in Nature"

This may seem like a diss, but 'is the exposure you get from Nature worth 10k?' has a different answer than 'is it worth paying 10k to publish a PDF online?' (which was always a strawman)
I propose we switch from saying preprint / published paper to saying publish / reprint.

"I published my paper on arxiv and I'm hoping to get it reprinted in Nature"
I'm all for the early sharing of research, but the only meaningful definition of 'preprint' is in the negative -- i.e., an article that hasn't been formally peer reviewed/published. The problem is this definition doesn't describe any evaluative process the work has undergone to get to that stage.
November 30, 2024 at 2:22 AM
Reposted by Josh Wilson
A starter pack with people who research visual sensation, perception, cognition, and memory.

Also, a curated feed just for vision science content.
November 10, 2024 at 5:50 PM
Reposted by Josh Wilson
Check out the following starter packs for suggestions on who to follow:
Cognitive neuroscience: bsky.app/starter-pack...
Neural engineering & computational neuroscience: bsky.app/starter-pack...
Affective science: bsky.app/starter-pack...
Women in neuroscience: bsky.app/starter-pack...
November 18, 2024 at 6:47 PM
fMRI *has* -- and will continue to -- shed much light on "how" questions when paired with predictive computational models of neural activity and behavior
Challenge: does FMRI have a future (apart from studies of development and ageing)? We want to know HOW the brain works and for that we need millisecond temporal resolution neuropixels, MEG, OPMs. After nearly 30 years of FMRI we know basically WHERE things happen.
November 17, 2024 at 6:16 PM