Isabelle Hoxha
@isabellehoxha.bsky.social
Postdoc in Cognitive and Computational Neuroscience at the University of Leiden, former postdoc at Ecole Normale Supérieure (Paris, France) and PhD student at Université Paris Saclay, interested in how we make decisions.
Okay, but is there a versatile strategy that works across tasks? We simulated two mega-tasks, one with all-stable and another with all-volatile environments. Positivity bias emerged in both cases, but the stable mega-task produced perseveration and the volatile one alternation.
September 3, 2025 at 8:51 AM
The script flips in volatile environments: negativity bias progressively emerged as volatility increased. This time, we also observed a strong tendency toward alternation, which intensified as the reversal frequency increased. These results were consistent across all reversal probability distributions.
September 3, 2025 at 8:51 AM
We found that in stable (no-reversal) environments, positivity bias emerges in all but rich environments, replicating the results by Cazé and van der Meer. On the flip side, perseveration only emerged when long learning periods were involved.
September 3, 2025 at 8:51 AM
We used an evolutionary algorithm to find the optimal set of parameters in each of these environments, evolving 1000 agents through 200 generations.
September 3, 2025 at 8:51 AM
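A minimal sketch of what such an evolutionary search could look like. Everything here is illustrative: the toy fitness function, parameter ranges, selection fraction, and mutation scale are assumptions, not the study's actual setup (which would evaluate each agent by running it on the bandit task).

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(params):
    """Toy surrogate fitness with a peak at (0.4, 0.1, 0.3).
    The real study would instead run the agent on the bandit task
    and return its total reward."""
    target = np.array([0.4, 0.1, 0.3])
    return -np.sum((params - target) ** 2)

def evolve(n_agents=1000, n_generations=200, sigma=0.05):
    """Simple truncation-selection evolutionary search over 3 parameters."""
    pop = rng.uniform(0, 1, size=(n_agents, 3))  # random initial population
    for _ in range(n_generations):
        scores = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(scores)[-n_agents // 10:]]  # keep top 10%
        # resample parents from the elite and mutate with Gaussian noise
        parents = elite[rng.integers(0, len(elite), n_agents)]
        pop = np.clip(parents + rng.normal(0, sigma, parents.shape), 0, 1)
    return pop[np.argmax([fitness(p) for p in pop])]

best = evolve(n_agents=200, n_generations=50)  # smaller run for illustration
```

Truncation selection plus Gaussian mutation is only one of many evolutionary-algorithm variants; the thread does not specify which one was used.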
We ran simulations of 2-armed bandit tasks using a Q-learning model with both an asymmetric update rule and a choice history bias. We tested several difficulty levels, environment richness levels, learning-period lengths, reversal frequencies, and reversal probability distributions.
September 3, 2025 at 8:51 AM
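A minimal sketch of a Q-learning agent of this kind on a 2-armed bandit. The asymmetric update uses separate learning rates for positive and negative prediction errors, and the choice history bias adds a bonus for repeating the previous choice. All parameter values and the exact functional form are assumptions for illustration, not the thread's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_bandit(n_trials=200, p_reward=(0.8, 0.2),
               alpha_pos=0.3, alpha_neg=0.1,  # asymmetric learning rates (hypothetical)
               beta=5.0, phi=0.5):            # inverse temperature, history bias (hypothetical)
    """Q-learning with asymmetric updates and choice-history bias."""
    Q = np.zeros(2)
    last_choice = np.zeros(2)  # one-hot vector of the previous choice
    choices, rewards = [], []
    for _ in range(n_trials):
        # softmax over Q-values, plus a bonus toward the previous choice
        logits = beta * Q + phi * last_choice
        p = np.exp(logits - logits.max())
        p /= p.sum()
        c = rng.choice(2, p=p)
        r = float(rng.random() < p_reward[c])
        pe = r - Q[c]                                # prediction error
        alpha = alpha_pos if pe > 0 else alpha_neg   # asymmetric update
        Q[c] += alpha * pe
        last_choice = np.eye(2)[c]
        choices.append(c)
        rewards.append(r)
    return np.array(choices), np.array(rewards)

choices, rewards = run_bandit()
```

With alpha_pos > alpha_neg the agent overweights good outcomes (positivity bias), and phi > 0 produces perseveration while phi < 0 produces alternation, which is the trade-off the simulations explore.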
On Friday, meet @anne-urai.bsky.social and me at poster C77 to discuss how we are applying Recurrent Neural Networks to perceptual decision making in mice. Looks like recurrence could explain how even perceptual decisions are made!
August 11, 2025 at 12:11 PM
Very excited for @cogcompneuro.bsky.social, where I will present two nice ongoing projects!
On Tuesday, meet me at poster A105 to learn about a huge value-learning dataset we are putting together (so far 3800 human subjects). Turns out, range adaptation is all the rage!
August 11, 2025 at 12:11 PM