Marlene Cohen
@marlenecohen.bsky.social
Neuroscientist at U Chicago
This study reflects Keon’s pioneering spirit. He came to our visual neurophysiology lab from a background in psychology and haptic perception and built bridges between fields, from Bayesian models and online behavior to neuronal mechanisms of cue integration. 7/
October 30, 2025 at 10:35 PM
These strategies varied across people. Age and self-reported ADHD or Autism influenced which cues were judged most accurately and how they were integrated, suggesting that individual differences in multisensory combination may reflect broader cognitive or neural traits. 4/
Many studies test how subjects combine information from well-practiced cues with feedback. But we often need to combine unfamiliar signals. For example, we might try to match what we see and hear when a new appliance beeps.
Keon and Doug Ruff asked how brains do that. 2/
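The cue-combination idea referenced in this thread is often formalized as reliability-weighted averaging: an ideal observer weights each cue by its inverse variance. A minimal sketch of that textbook formulation (not the paper's actual model; all numbers are made up for illustration):

```python
import numpy as np

def combine_cues(means, variances):
    """Fuse independent Gaussian cue estimates into one posterior.

    Each cue is weighted by its reliability (inverse variance), so the
    combined estimate is pulled toward the more reliable cue, and the
    combined variance is lower than any single cue's variance.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    combined_mean = float(np.sum(weights * means))
    combined_var = float(1.0 / np.sum(1.0 / variances))
    return combined_mean, combined_var

# Hypothetical example: a reliable visual cue and a noisier auditory cue
# give conflicting estimates of the same quantity.
mean, var = combine_cues(means=[10.0, 14.0], variances=[1.0, 4.0])
print(mean, var)  # estimate sits closer to the reliable visual cue
```

With weights 0.8 and 0.2, the fused estimate lands at 10.8 with variance 0.8 — more certain than either cue alone, which is the signature behavioral prediction these models are tested against.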
This is the first chapter of Grace’s thesis, and there is so much more to come. She is something special, and I am going to thoroughly enjoy seeing her take our field by storm. 9/
September 23, 2025 at 3:09 PM
We found all of these neuronal signatures in V4. But the only ones that reliably predicted behavior were related to how consistent population responses were during memory encoding and retrieval. More consistent responses = greater memory success. 6/
Grace and awesome staff scientist Cheng Xue tested whether area V4 contains signals that could support recognition memory. Their task revealed images bit by bit, which let us analyze response dynamics and increased difficulty so we could compare neuronal responses on correct vs. error trials. 4/
Most previous studies have focused on the hippocampus and higher cortical areas. But behavioral work shows that memorability depends on visual features, and that recognition memory distinguishes even semantically similar images. Seems like a job for mid-level visual cortex. 3/
First author Ramanujan Srinath demonstrated some of the things that make him a great scientist: he thinks deeply & creatively, brings together many forms of evidence & people, and is determined and innovative. He is on the job market this year and will run an incredible lab – don’t miss out! 5/
August 15, 2025 at 3:38 PM
Doug tested this prediction by electrically stimulating MT or dlPFC. Consistent with previous results, stimulating MT biased choices. But as predicted by our model, stimulating dlPFC elicited ‘winner take all’ behavior that was very different from the effect of stimulating MT.
10/
January 4, 2025 at 4:25 PM
The incredible Sol Markman and Jason Kim created RNN models with two modules. One was trained to mimic the formatting in MT and one was trained to mimic the formatting in dlPFC.
8/
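A two-module RNN of the general kind described above can be sketched as a sensory module feeding a decision module. This is a hypothetical illustration, not the authors' model: the sizes, wiring, and random (untrained) weights are all assumptions — in the real work each module was trained to reproduce the representational formatting measured in its corresponding area.

```python
import numpy as np

rng = np.random.default_rng(0)
N_MT, N_PFC, N_IN, T = 32, 32, 4, 50  # illustrative sizes

# Fixed random weights stand in for trained ones in this sketch.
W_in = rng.normal(0, 0.5, (N_MT, N_IN))                   # inputs -> MT-like module
W_mt = rng.normal(0, 1 / np.sqrt(N_MT), (N_MT, N_MT))     # MT recurrence
W_ff = rng.normal(0, 0.5, (N_PFC, N_MT))                  # MT -> dlPFC-like module
W_pfc = rng.normal(0, 1 / np.sqrt(N_PFC), (N_PFC, N_PFC)) # dlPFC recurrence

def run(inputs):
    """Forward pass: inputs drive the MT module, whose activity drives dlPFC."""
    h_mt = np.zeros(N_MT)
    h_pfc = np.zeros(N_PFC)
    traces = []
    for x in inputs:
        h_mt = np.tanh(W_mt @ h_mt + W_in @ x)
        h_pfc = np.tanh(W_pfc @ h_pfc + W_ff @ h_mt)
        traces.append((h_mt.copy(), h_pfc.copy()))
    return traces

# e.g. time-varying motion and reward signals as input channels
inputs = rng.normal(0, 1, (T, N_IN))
traces = run(inputs)
print(len(traces), traces[-1][1].shape)
```

The modular split is the point: because each module has its own recurrent dynamics, the two can be trained toward different representational formats and then probed (or perturbed) separately, mirroring the MT vs. dlPFC stimulation comparison.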
Consistent with ‘everything is everywhere’, Doug could decode both motion and reward information in both areas. But it was formatted differently: MT kept that information separate, while dlPFC mushed them together in ways that reflected the decision strategy.
7/
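The "decodable everywhere, formatted differently" idea can be illustrated with a toy simulation (not the paper's analysis — the populations, noise levels, and decoder here are all made up). Two simulated populations both carry motion and reward signals; in the "MT"-like one each neuron carries only one variable, while in the "dlPFC"-like one every neuron carries a random mixture. A simple linear decoder recovers both variables from both populations:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons = 400, 40
motion = rng.choice([-1.0, 1.0], n_trials)  # binary motion direction per trial
reward = rng.choice([-1.0, 1.0], n_trials)  # binary reward condition per trial

noise = lambda: rng.normal(0, 0.5, (n_trials, n_neurons))

# "MT": first half of the neurons carries motion, second half carries reward.
mt = np.hstack([np.outer(motion, np.ones(n_neurons // 2)),
                np.outer(reward, np.ones(n_neurons // 2))]) + noise()

# "dlPFC": every neuron carries a random mixture of both variables.
mix_m, mix_r = rng.normal(size=n_neurons), rng.normal(size=n_neurons)
pfc = np.outer(motion, mix_m) + np.outer(reward, mix_r) + noise()

def decode_acc(pop, target):
    """Least-squares linear decoder; accuracy on the (training) trials."""
    w, *_ = np.linalg.lstsq(pop, target, rcond=None)
    return float(np.mean(np.sign(pop @ w) == target))

for name, pop in [("MT", mt), ("dlPFC", pfc)]:
    print(name, decode_acc(pop, motion), decode_acc(pop, reward))
```

A population-level decoder succeeds in both cases, so decodability alone cannot distinguish the areas; the difference lies in the format, i.e. whether single neurons keep the variables separate or mix them.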
Superstar staff scientist @douglasruff.bsky.social trained subjects to flexibly make decisions based on a combination of vision (motion direction) and rewards expected from different choices, and he recorded groups of neurons in visual area MT and decision area dlPFC.
6/
On the other hand, you all keep publishing evidence that everything an animal perceives, knows, or does can be decoded from essentially any brain area. If everything is everywhere, why have distinct brain areas?
3/
We harnessed a variety of new approaches to resolve an emerging paradox. On one hand, animals with more cognitively complex, flexible behavior tend to have more distinct brain areas, and models that perform many tasks organize into modules.
(illustrations from the awesome bioart.niaid.nih.gov)
2/