Josh McDermott
joshhmcdermott.bsky.social
Working to understand how humans and machines hear. Prof at MIT; director of Lab for Computational Audition. https://mcdermottlab.mit.edu/
Pinned
New pre-print from our lab, by Lakshmi Govindarajan with help from Sagarika Alavilli, introducing a new type of model for studying sensory uncertainty. www.biorxiv.org/content/10.1...
Here is a summary. (1/n)
Task-optimized models of sensory uncertainty reproduce human confidence judgments
Sensory input is often ambiguous, leading to uncertain interpretations of the external world. Estimates of perceptual uncertainty might be useful in guiding behavior, but it remains unclear whether hu...
www.biorxiv.org
November 9, 2025 at 9:34 PM
Reposted by Josh McDermott
if you see this post, your actions are:
- if you have a spare buck, give it to Wikipedia, then repost this
- if you don't have a spare buck, just repost

your action is mandatory for the world's best source of information to survive
I’ve never donated to Wikipedia before but I set up a small monthly donation as a fuck you to the world’s richest psychopath.
Elon Musk takes aim at Wikipedia
Musk has denounced Wikipedia as "Wokepedia" on X and urged people not to donate to the platform.
www.newsweek.com
December 26, 2024 at 12:03 PM
Reposted by Josh McDermott
Excited that this work discovering cross-species signatures of stabilizing foot placement control is now out in PNAS!

pnas.org/doi/10.1073/...

@antoinecomite.bsky.social
October 21, 2025 at 9:39 PM
Reposted by Josh McDermott
Want to make publication-ready figures come straight from Python without having to do any manual editing? Are you fed up with axes labels being unreadable during your presentations? Follow this short tutorial including code examples! 👇🧵
October 16, 2025 at 8:26 AM
Reposted by Josh McDermott
Excited to share that I'm joining WashU in January as an Assistant Prof in Psych & Brain Sciences! 🧠✨

I'm also recruiting grad students to start next September - come hang out with us! Details about our lab here: www.deckerlab.com

Reposts are very welcome! 🙌 Please help spread the word!
DeckerLab
www.deckerlab.com
October 1, 2025 at 6:30 PM
Reposted by Josh McDermott
Brown’s Department of Cognitive & Psychological Sciences is hiring a tenure-track Assistant Professor, working in the area of AI and the Mind (start July 1, 2026). Apply by Nov 8, 2025 👉 apply.interfolio.com/173939

#AI #CognitiveScience #AcademicJobs #BrownUniversity
Apply - Interfolio
apply.interfolio.com
September 23, 2025 at 5:51 PM
Reposted by Josh McDermott
Variance partitioning is used to quantify the overlap of two models. Over the years, I have found that this can be a very confusing and misleading concept. So we finally decided to write a short blog to explain why.
@martinhebart.bsky.social @gallantlab.org
diedrichsenlab.org/BrainDataSci...
September 10, 2025 at 4:58 PM
Reposted by Josh McDermott
🔊New paper! Recomposer allows editing sound events within complex scenes based on textual descriptions and event roll representations. And we discuss the details that matter!

Work by the Sound Understanding folks
@GoogleDeepMind

arxiv.org/abs/2509.05256
Recomposer: Event-roll-guided generative audio editing
Editing complex real-world sound scenes is difficult because individual sound sources overlap in time. Generative models can fill-in missing or corrupted details based on their strong prior understand...
arxiv.org
September 11, 2025 at 7:38 PM
If you are attending the Kempner symposium, I encourage you to check out @gelbanna.bsky.social's poster on models and benchmarks of continuous speech perception. He has many interesting results.
At Frontiers in NeuroAI symposium @kempnerinstitute.bsky.social, I will be presenting a poster entitled "A Model of Continuous Phoneme Recognition Reveals the Role of Context in Human Speech Perception" (Poster #17).

Work done with @joshhmcdermott.bsky.social.

#NeuroAI2025

🧵1/4
June 5, 2025 at 1:30 AM
Reposted by Josh McDermott
How bad will it be? Catastrophic.

Proposed cuts to #NSF, #NIH, and #NASA will set the US R&D landscape back 25 yrs+, cause economic and job loss now, and undermine innovations to come.

But, this is the WH's *proposed* budget.

Speak up now before it is too late.

(inflation-adjusted dollars below)
May 31, 2025 at 2:50 AM
Reposted by Josh McDermott
We are presenting our work “Discriminating image representations with principal distortions” at #ICLR2025 today (4/24) at 3pm! If you are interested in comparing model representations with other models or human perception, stop by poster #63. Highlights in 🧵
openreview.net/forum?id=ugX...
Discriminating image representations with principal distortions
Image representations (artificial or biological) are often compared in terms of their global geometric structure; however, representations with similar global structure can have strikingly...
openreview.net
April 24, 2025 at 5:13 AM
Reposted by Josh McDermott
My father-in-law, Jack Strominger, and I wrote a letter to the @wsj.com editor about the current threats to science due to Trump's funding freeze. Please repost! www.wsj.com/opinion/scie...
Opinion | Science Suffers With Trump’s Funding Freeze
America’s scientific enterprise demands reliable stewardship, not destabilizing political intervention.
www.wsj.com
April 21, 2025 at 6:21 PM
Reposted by Josh McDermott
Impressive work to stimulate only the M-cones of the retina and make green appear greener than ever.

Coming (not too) soon to a theater near you!

www.science.org/doi/10.1126/...
Novel color via stimulation of individual photoreceptors at population scale
Image display by cell-by-cell retina stimulation, enabling colors impossible to see under natural viewing.
www.science.org
April 19, 2025 at 1:06 PM
Reposted by Josh McDermott
I’m hiring a full-time lab tech for two years starting May/June. Strong coding skills required, ML a plus. Our research on the human brain uses fMRI, ANNs, intracranial recording, and behavior. A great stepping stone to grad school. Apply here:
careers.peopleclick.com/careerscp/cl...
Technical Associate I, Kanwisher Lab
MIT - Technical Associate I, Kanwisher Lab - Cambridge MA 02139
careers.peopleclick.com
March 26, 2025 at 3:09 PM
Reposted by Josh McDermott
Now accepting applications for the summer 2025 cohort: STEMM opportunities for college students with Hearing loss to Engage in Auditory Research (STEMM-HEAR)

www.stemm-hear.bme.jhu.edu
Home - STEMM-HEAR
www.stemm-hear.bme.jhu.edu
March 25, 2025 at 9:12 PM
Reposted by Josh McDermott
Applications are open for the 2025 Flatiron Institute Junior Theoretical Neuroscience Workshop! A two-day workshop 7/10-7/11 in NYC for PhD students and postdocs. All travel paid. Apply by April 14th.🧠🗽🧑‍🔬http://jtnworkshop2025.flatironinstitute.org/
@flatironinstitute.org @simonsfoundation.org
JTN - 2025
jtnworkshop2025.flatironinstitute.org
March 6, 2025 at 4:24 PM
Reposted by Josh McDermott
We are experiencing an assault on science unparalleled by anything I’ve seen in my life. It’s not one issue or another anymore, the entire institution is under attack by the most powerful individuals in the country.

This Friday, where will you be?

standupforscience2025.org
March 2, 2025 at 4:27 PM
If you are here at the last day of ARO, don’t miss Sagarika Alavilli’s talk on “Measuring and Modeling Multi-Source Environmental Sound Recognition”, happening at 9:45 in Ocean Ballroom 9 - 12.
February 26, 2025 at 1:37 PM
Two posters from our lab on deep auditory models, presented today at ARO:

T107 - “Modeling Normal and Impaired Hearing With Deep Neural Networks Optimized for Ecological Tasks” by Mark Saddler et al.

T138 - “Modeling Continuous Speech Perception Using Artificial Neural Networks” by Gasser Elbanna
February 25, 2025 at 3:53 PM
Two more posters from our lab are being presented today at ARO:

M133 - “Neural Network Models of Hearing Clarify Factors Limiting Cochlear Implant Outcomes” by Annesya Banerjee et al.

M166 - “Preferences for Loudness and Pitch Vary Across Cultures” by Malinda McPherson et al.
February 24, 2025 at 6:27 PM
If you are at ARO today, lots of stuff to see from our lab.

Posters:
SU181 - “Optimization Under Ecological Realism Reproduces Signatures of Human Speech Recognition” by Annika Magaro et al.

SU184 - “Texture Streaming in Auditory Scenes” by Jarrod Hicks
February 23, 2025 at 6:01 PM
If you are at ARO today, check out Lakshmi Govindarajan's poster, "Confidence in Sound Localization Reflects Calibrated Uncertainty Estimation" (poster S147).
February 22, 2025 at 6:24 PM
Reposted by Josh McDermott
Join us! Science Homecoming helps scientists reconnect with communities by writing about the importance of science funding in their hometown newspapers. We’ve mapped every small newspaper in the U.S. and provide resources to get you started. Help science get back home 🧪🔬🧬 🏠

sciencehomecoming.com
Science Homecoming
sciencehomecoming.com
February 18, 2025 at 5:12 PM