#BionicVision #Blindness #LowVision #VisionScience #CompNeuro #NeuroTech #NeuroAI
If you try to predict perception from stimulation parameters alone, you’re basically at chance.
But if you use neural responses, suddenly you can decode detection, brightness, and color with high accuracy.
Yes ... but control breaks down the farther you stray from the brain’s natural manifold.
Still, our methods required lower currents and evoked more stable percepts.
1️⃣ Gradient-based optimizer (precise, but slow)
2️⃣ Inverse neural net (fast, real-time)
Both shaped neural responses far better than conventional 1-to-1 mapping
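The gradient-based approach above can be sketched with a toy linear forward model standing in for the learned stimulation-to-response network (all shapes, names, and numbers here are illustrative, not the lab's actual model):

```python
import numpy as np

# Hypothetical linear forward model: response = W @ stim. This stands in for
# the learned deep network mapping stimulation parameters to neural responses.
rng = np.random.default_rng(0)
n_electrodes, n_neurons = 8, 16
W = rng.normal(size=(n_neurons, n_electrodes))

def forward(stim):
    return W @ stim

# A reachable target neural response (generated from a random stimulus).
target = forward(rng.uniform(0, 1, n_electrodes))

# Gradient-based optimizer: descend on the squared error between predicted
# and target response. Gradient of 0.5*||W s - t||^2 w.r.t. s is W.T @ (W s - t).
stim = np.zeros(n_electrodes)
lr = 0.01
for _ in range(2000):
    err = forward(stim) - target
    stim -= lr * (W.T @ err)

print(np.allclose(forward(stim), target, atol=1e-3))  # → True
```

The "inverse neural net" variant trades this iterative loop for a single forward pass through a network trained to map desired responses back to stimulation parameters, which is what makes it fast enough for real-time use.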
💡 Key insight: accounting for pre-stimulus activity drastically improved predictions across sessions.
This makes the model robust to day-to-day drift.
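Why conditioning on pre-stimulus activity helps can be illustrated with a toy linear fit (purely a sketch under assumed numbers, not the study's model or data): if the response depends on both the stimulus and the ongoing baseline state, a predictor that ignores the baseline leaves that variance unexplained.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
stim = rng.uniform(0, 1, n)        # stimulation amplitude per trial
baseline = rng.normal(0, 1, n)     # pre-stimulus activity (drifts across sessions)
resp = 2.0 * stim + 0.8 * baseline + rng.normal(0, 0.05, n)

def fit_r2(X, y):
    """Least-squares fit; return fraction of variance explained."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

X_stim = np.column_stack([stim, np.ones(n)])            # stimulus only
X_full = np.column_stack([stim, baseline, np.ones(n)])  # + pre-stimulus activity

# Adding the pre-stimulus regressor explains substantially more variance.
print(fit_r2(X_stim, resp), fit_r2(X_full, resp))
```

The same logic applies across sessions: day-to-day drift shows up as shifts in the baseline, so a model that takes the pre-stimulus state as input can compensate rather than mispredict.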
So we turned to 🧠 data: >6,000 stim-response pairs over 4 months in a blind volunteer, letting a model learn the rules from the data.
TL;DR Deep learning lets us synthesize efficient stimulation patterns that reliably evoke percepts, outperforming conventional calibration.
www.biorxiv.org/content/10.1...
Grateful to my mentors, students, and funders who shaped this journey and to @ucsantabarbara.bsky.social for giving the Bionic Vision Lab a home!
Full post: www.linkedin.com/posts/michae...
🕥 Sun 10:45pm · Talk Room 1
🧠 www.visionsciences.org/presentation...
First up is PhD Candidate Byron A. Johnson:
Fri, 4:30pm, Talk Room 1: Differential Effects of Peripheral and Central Vision Loss on Scene Perception and Eye Movement Patterns
www.visionsciences.org/presentation...
Some joyful moments from the Plous Award Ceremony: Honored to give the lecture, receive the framed award & celebrate with the people who made it all possible!
@bionicvisionlab.org @ucsantabarbara.bsky.social
https://youtu.be/okSYmGrPBDE?feature=shared
We simulate #BionicEye vision using neuro-inspired #AI models embedded in #VR, which are computationally heavy. But with Sentis we can run them in real-time to update based on head...
Except they "borrowed" a slide of mine and @NLM_NIH footage w/ @byebyedarkness, crediting it to SecondSight & calling it Nintendo-like graphics as @elonmusk said🤣@neuralink must deal with this all the...
- Colville-Dearborn Memorial Award, College of Letters & Science
- Morgan Award for Research Promise, @UCSBpsych
Tori's a force of nature & we'll miss her dearly!! Can't wait to see all the things she'll achieve in @mollycrockett's lab @PsychPrinceton
@galenpogo: Outstanding TA Award, @ucsbcs
Galen is working on AI alignment & is currently interning with @RealityLabs
- @JustinKasowski (DYNS): Founder & CEO at RealmVR
- Aiwen Xu (@ucsbcs): Joining @SnowflakeDB this summer
It's been a pleasure working with you, and I can't wait to see your future achievements!
How did it come to the acquisition of @PixiumVision by Science Corporation? What will happen to the existing PRIMA...
How did it get to this? What will happen next?
Exclusive interview with Dr. Brian Burg (former Director of R&D at...
Come help us figure out how the mouse brain processes visual information during active exploration to support navigation: bionicvisionlab.org/join
#PhDone #compneuro #neuroscience #AcademicChatter
Hear from Prof. Gordon Legge @UMNews about his decorated career, shifting paradigms of vision science, and the potential & limitations of #BionicEye...
We found that only 25% of model neurons were purely visual - all others were driven by multiple behavioral variables!