Disabled Voices Channel
@disabledvoices.newsmast.community.ap.brid.gy
Welcome to the Newsmast Disabled Voices Channel. A curated feed of posts from the Fediverse, handmade by @newsmast@newmast.social, and broadcasting […]

[bridged from https://newsmast.community/@disabledvoices on the fediverse by https://fed.brid.gy/ ]
Reposted by Disabled Voices Channel
If you're getting into the groove for the snow storm of doom, I'm rocking the mighty Mushroom FM at 9 PM Eastern with two hours of music where almost every song is about snow. Yikes!
January 24, 2026 at 1:17 AM
Reposted by Disabled Voices Channel
Anyone else use cardio as an add-on for POTS? I couldn't tolerate beta-blockers, so I figured out hiking for conditioning. For me, it helps on workdays when I need to stand a lot. Not perfect, but it helps.

#ChronicIllness #POTS #Hiking
January 23, 2026 at 8:24 PM
Reposted by Disabled Voices Channel
The vOICe for Android v2.78 released https://play.google.com/store/apps/details?id=vOICe.vOICe Fix for non-functioning username and password handling in MJPEG streaming URL. Minor tweaks to reduce ANR errors. #bci #neurotech

Total number of devices supported by this release according to Google […]
Original post on mas.to
January 23, 2026 at 11:51 AM
Reposted by Disabled Voices Channel
IEEE Spectrum: These hearing aids will tune in to your brain https://spectrum.ieee.org/hearing-aids-biosignals "Tracking brain waves and eye signals could cut through the noise"; #bci #neurotech
These Hearing Aids Will Tune in to Your Brain
Imagine you’re at a bustling dinner party filled with laughter, music, and clinking silverware. You’re trying to follow a conversation across the table, but every word feels like it’s wrapped in noise. For most people, these types of party scenarios, where it’s difficult to filter out extraneous sounds and focus on a single source, are an occasional annoyance. For millions with hearing loss, they’re a daily challenge—and not just in busy settings. Today’s hearing aids aren’t great at determining which sounds to amplify and which to ignore, and this often leaves users overwhelmed and fatigued. Even the routine act of conversing with a loved one during a car ride can be mentally draining, simply because the hum of the engine and road noises are magnified to create loud and constant background static that blurs speech.

In recent years, modern hearing aids have made impressive strides. They can, for example, use a technology called adaptive beamforming to focus their microphones in the direction of a talker. Noise-reduction settings also help decrease background cacophony, and some devices even use machine-learning-based analysis, trained on uploaded data, to detect certain environments—for example a car or a party—and deploy custom settings.

That’s why I was initially surprised to find out that today’s state-of-the-art hearing aids aren’t good enough. “It’s like my ears work but my brain is tired,” I remember one elderly man complaining, frustrated with the inadequacy of his cutting-edge noise-suppression hearing aids. At the time, I was a graduate student at the University of Texas at Dallas, surveying individuals with hearing loss. The man’s insight led me to a realization: Mental strain is an unaddressed frontier of hearing technology.

But what if hearing aids were more than just amplifiers? What if they were listeners too? I envision a new generation of intelligent hearing aids that not only boost sound but also read the wearer’s brain waves and other key physiological markers, enabling them to react accordingly to improve hearing and counter fatigue.

Until last spring, when I took time off to care for my child, I was a senior audio research scientist at Harman International, in Los Angeles. My work combined cognitive neuroscience, auditory prosthetics, and the processing of biosignals, which are measurable physiological cues that reflect our mental and physical state. I’m passionate about developing brain-computer interfaces (BCIs) and adaptive signal-processing systems that make life easier for people with hearing loss. And I’m not alone. A number of researchers and companies are working to create smart hearing aids, and it’s likely they’ll come on the market within a decade.

Two technologies in particular are poised to revolutionize hearing aids, offering personalized, fatigue-free listening experiences: electroencephalography (EEG), which tracks brain activity, and pupillometry, which uses eye measurements to gauge cognitive effort. These approaches might even be used to improve consumer audio devices, transforming the way we listen everywhere.

## Aging Populations in a Noisy World

More than 430 million people suffer from disabling hearing loss worldwide, including 34 million children, according to the World Health Organization. And the problem will likely get worse due to rising life expectancies and the fact that the world itself seems to be getting louder. By 2050, an estimated 2.5 billion people will suffer some degree of hearing loss and 700 million will require intervention.
On top of that, as many as 1.4 billion of today’s young people—nearly half of those aged 12 to 34—could be at risk of permanent hearing loss from listening to audio devices too loud and for too long. Every year, close to a trillion dollars is lost globally due to unaddressed hearing loss, a trend that is also likely getting more pronounced. That doesn’t account for the significant emotional and physical toll on the hearing impaired, including isolation, loneliness, depression, shame, anxiety, sleep disturbances, and loss of balance.

[Image: Flex-printed electrode arrays, such as these from the Fraunhofer Institute for Digital Media Technology, offer a comfortable option for collecting high-quality EEG signals. Credit: Leona Hofmann/Fraunhofer IDMT]

And yet, despite widespread availability, hearing aid adoption remains low. According to a 2024 study published in __The Lancet__, only about 13 percent of American adults with hearing loss regularly wear hearing aids. Key reasons for this low uptake include discomfort, stigma, cost—and, crucially, frustration with the poor performance of hearing aids in noisy environments.

Historically, hearing technology has come a long way. As early as the 13th century, people began using horns of cows and rams as “ear trumpets.” Commercial versions made of various materials, including brass and wood, came on the market in the early 19th century. (Beethoven, who famously began losing his hearing in his twenties, used variously shaped ear trumpets, some of which are now on display in a museum in Bonn, Germany.) But these contraptions were so bulky that users had to hold them with their hands or wear them in headbands. To avoid stigma, some even hid hearing aids inside furniture to mask their disability. In 1819, a special acoustic chair was designed for the king of Portugal, featuring arms ornately carved to look like open lion mouths, which helped transmit sound to the king’s ear via speaking tubes.

Modern hearing aids came into being after the advent of electronics in the early 20th century. Early devices used vacuum tubes and then transistors to amplify sound, shrinking over time from bulky body-worn boxes to discreet units that fit behind or inside the ear. At their core, today’s hearing aids still work on the same principle: A microphone picks up sound, a processor amplifies and shapes it to match the user’s hearing loss, and a tiny speaker delivers the adjusted sound into the ear canal.
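As a rough illustration of that signal chain, here is a minimal Python sketch, assuming a simple FFT filter, an invented audiogram, and the textbook half-gain rule rather than any manufacturer's fitting formula:

```python
import numpy as np

def shape_to_audiogram(x, fs, audiogram):
    """Apply frequency-dependent gain to a mono signal x sampled at fs (Hz).

    audiogram maps band-center frequency (Hz) -> measured hearing loss (dB).
    The half-gain rule (gain = 0.5 * loss) is a textbook placeholder; real
    fittings use prescription formulas such as NAL-NL2 or DSL.
    """
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    centers = sorted(audiogram)
    loss_db = np.interp(freqs, centers, [audiogram[c] for c in centers])
    gain = 10.0 ** (0.5 * loss_db / 20.0)  # half-gain dB -> linear amplitude
    return np.fft.irfft(X * gain, n=len(x))

# Illustrative wearer with typical high-frequency loss (dB values invented).
fs = 16_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 4000 * t)
audiogram = {250: 10, 500: 15, 1000: 20, 2000: 35, 4000: 55, 8000: 65}
y = shape_to_audiogram(x, fs, audiogram)  # the 4 kHz component is boosted most
```

A real device does this in short streaming blocks with compression and feedback control layered on top, but the amplify-and-shape principle is the same.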
Today’s best-in-class devices, like those from Oticon, Phonak, and Starkey, have pioneered increasingly advanced technologies, including the aforementioned beamforming microphones, frequency lowering to better pick up high-pitched sounds and voices, and machine learning to recognize and adapt to specific environments. For example, the device may reduce amplification in a quiet room to avoid escalating background hums, or increase amplification in a noisy café to make speech more intelligible.

Advances in the AI technique of deep learning, which relies on artificial neural networks to automatically recognize patterns, also hold enormous promise. Using context-aware algorithms, this technology can, for example, be used to help distinguish between speech and noise, predict and suppress unwanted clamor in real time, and attempt to clean up speech that is muffled or distorted.

The problem? As of right now, consumer systems respond only to external acoustic environments and not to the internal cognitive state of the listener—which means they act on imperfect and incomplete information. So, what if hearing aids were more empathetic? What if they could sense when the listener’s brain feels tired or overwhelmed and automatically use that feedback to deploy advanced features?

## Using EEG to Augment Hearing Aids

When it comes to creating intelligent hearing aids, there are two main challenges. The first is building convenient, power-efficient wearable devices that accurately detect brain states. The second, perhaps more difficult step is decoding feedback from the brain and using that information to help hearing aids adapt in real time to the listener’s cognitive state and auditory experience.

Let’s start with EEG. This century-old noninvasive technology uses electrodes placed on the scalp to measure the brain’s electrical activity through voltage fluctuations, which are recorded as “brain waves.”

[Image: Brain-computer interfaces allow researchers to accurately determine a listener’s focus in multitalker environments. Here, Professor Christopher Smalt works on an attention-decoding system at the MIT Lincoln Laboratory. Credit: MIT Lincoln Laboratory]

Clinically, EEG has long been applied for diagnosing epilepsy and sleep disorders, monitoring brain injuries, assessing hearing ability in infants and impaired individuals, and more. And while standard EEG requires conductive gel and bulky headsets, we now have versions that are far more portable and convenient. These breakthroughs have already allowed EEG to migrate from hospitals into the consumer tech space, driving everything from neurofeedback headbands to the BCIs in gaming and wellness apps that allow people to control devices with their minds.

The cEEGrid project at Oldenburg University, in Germany, positions lightweight adhesive electrodes around the ear to create a low-profile version. In Denmark, Aarhus University’s Center for Ear-EEG also has an ear-based EEG system designed for comfort and portability. While the signal-to-noise ratio is slightly lower compared to head-worn EEG, these ear-based systems have proven sufficiently accurate for gauging attention, listening effort, hearing thresholds, and speech tracking in real time.

For hearing aids, EEG technology can pick up brain-wave patterns that reveal how well a listener is following speech: When listeners are paying attention, their brain rhythms synchronize with the syllabic rhythms of discourse, essentially tracking the speaker’s cadence. By contrast, if the signal becomes weaker or less precise, it suggests the listener is struggling to comprehend and losing focus.

During my own Ph.D. research, I observed firsthand how real-time brain-wave patterns, picked up by EEG, can reflect the quality of a listener’s speech cognition. For example, when participants successfully homed in on a single talker in a crowded room, their neural rhythms aligned nearly perfectly with that speaker’s voice. It was as if there were a brain-based spotlight on that speaker! But when the background fracas grew louder or the listener’s attention drifted, those patterns waned, revealing the strain of keeping up.

Today, researchers at Oldenburg University, Aarhus University, and MIT are developing attention-decoding algorithms specifically for auditory applications. For example, Oldenburg’s cEEGrid technology has been used to successfully identify which of two speakers a listener is trying to hear. In a related study, researchers demonstrated that Ear-EEG can track the attended speech stream in multitalker environments.
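To make the envelope-tracking idea concrete, here is a toy single-channel sketch of auditory attention decoding in Python. Published decoders, including those from the groups named above, use regularized regression over many electrodes and time lags; the synthetic envelopes, the single 150 ms neural delay, and the plain correlation below are all simplifying assumptions:

```python
import numpy as np

def decode_attention(eeg, env_a, env_b, fs, lag_s=0.15):
    """Pick the talker whose speech envelope best matches the EEG trace.

    eeg: a low-pass-filtered EEG channel; env_a, env_b: the two talkers'
    speech envelopes, all sampled at fs. lag_s models the neural delay.
    """
    lag = int(lag_s * fs)
    e = eeg[lag:]  # EEG at time t reflects the stimulus at roughly t - lag
    scores = {name: np.corrcoef(e, env[: len(e)])[0, 1]
              for name, env in (("A", env_a), ("B", env_b))}
    return max(scores, key=scores.get), scores

# Synthetic check: fake EEG that follows talker A's envelope plus noise.
rng = np.random.default_rng(0)
fs = 64  # envelopes and EEG downsampled to 64 Hz
smooth = np.ones(16) / 16
env_a = np.convolve(np.abs(rng.standard_normal(fs * 30)), smooth, "same")
env_b = np.convolve(np.abs(rng.standard_normal(fs * 30)), smooth, "same")
lag = int(0.15 * fs)
eeg = np.concatenate([np.zeros(lag), env_a[:-lag]])
eeg += 0.5 * rng.standard_normal(eeg.size)
print(decode_attention(eeg, env_a, env_b, fs))  # expect talker "A"
```

The engineering challenge is not this comparison itself but reaching reliable decisions from noisy ear-EEG within a few seconds, fast enough for a hearing aid to act on.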
All of this could prove transformational in creating neuroadaptive hearing aids. If a listener’s EEG reveals a drop in speech tracking, the hearing aid could infer increased listening difficulty, even if ambient noise levels have remained constant. For example, if a hearing-impaired car driver can’t focus on a conversation due to mental fatigue caused by background noise, the hearing aid could switch on beamforming to better spotlight the passenger’s voice, as well as machine-learning settings to deploy sound canceling that blocks the din of the road.
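Schematically, the adaptation itself could be as simple as a hysteresis loop on the EEG-derived tracking score; the thresholds, hold time, and mode names in this sketch are invented for illustration, not taken from any shipping device:

```python
class NeuroadaptiveController:
    """Toy control loop: sustained poor speech tracking -> enable assistance."""

    def __init__(self, low=0.05, high=0.12, hold_frames=20):
        self.low, self.high = low, high  # hysteresis band on the tracking score
        self.hold = hold_frames          # frames score must stay low to react
        self.count = 0
        self.assist_on = False           # beamforming + stronger noise reduction

    def update(self, tracking_score):
        """Feed one per-frame tracking score; return the current mode."""
        if not self.assist_on:
            self.count = self.count + 1 if tracking_score < self.low else 0
            if self.count >= self.hold:   # sustained difficulty: assist
                self.assist_on, self.count = True, 0
        elif tracking_score > self.high:  # clearly comfortable again: relax
            self.assist_on = False
        return "beamforming+NR" if self.assist_on else "omni"
```

The hysteresis band and hold time keep the device from flapping between modes every time the score briefly dips, which would be more fatiguing than no adaptation at all.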
Of course, there are several hurdles to cross before commercialization becomes possible. For one thing, EEG-paired hearing aids will need to handle the fact that neural responses differ from person to person, which means they will likely need to be calibrated individually to capture each wearer’s unique brain-speech patterns. Additionally, EEG signals are themselves notoriously “noisy,” especially in real-world environments. Luckily, we already have algorithms and processing tools for cleaning and organizing these signals so computer models can search for key patterns that predict mental states, including attention drift and fatigue.

Commercial versions of EEG-paired hearing aids will also need to be small and energy-efficient when it comes to signal processing and real-time computation. And getting them to work reliably, despite head movement and daily activity, will be no small feat. Importantly, companies will need to resolve ethical and regulatory considerations, such as data ownership. To me, these challenges seem surmountable, especially with technology progressing at a rapid clip.

## A Window to the Brain: Using Our Eyes to Hear

Now let’s consider a second way of reading brain states: through the listener’s eyes. When a person has trouble hearing and starts feeling overwhelmed, the body reacts. Heart-rate variability diminishes, indicating stress, and sweating increases. Researchers are investigating how these types of autonomic nervous-system responses can be measured and used to create smart hearing aids. For the purposes of this article, I will focus on a response that seems especially promising—namely, pupil size.

Pupillometry is the measurement of pupil size and how it changes in response to stimuli. We all know that pupils expand or contract depending on light brightness. As it turns out, pupil size is also an accurate means of evaluating attention, arousal, mental strain—and, crucially, listening effort.

[Image: Pupil size is determined by both external stimuli, such as light, and internal stimuli, such as fatigue or excitement. Credit: Chris Philpot]

In recent years, studies at University College London and Leiden University have demonstrated that pupil dilation is consistently greater in hearing-impaired individuals when processing speech in noisy conditions. Research has also shown pupillometry to be a sensitive, objective correlate of speech intelligibility and mental strain. It could therefore offer a feedback mechanism for user-aware hearing aids that dynamically adjust amplification strategies, directional focus, or noise reduction based not just on the acoustic environment but on how hard the user is working to comprehend speech.

While more straightforward than EEG, pupillometry presents its own engineering challenges. Unlike the ears, which can be assessed from behind, pupillometry requires a direct line of sight to the pupil, necessitating a stable, front-facing camera-to-eye configuration—which isn’t easy to achieve when a wearer is moving around in real-world settings. On top of that, most pupil-tracking systems require infrared illumination and high-resolution optical cameras, which are too bulky and power intensive for the tiny housings of in-ear or behind-the-ear hearing aids. All this makes it unlikely that standalone hearing aids will include pupil-tracking hardware in the near future.

A more viable approach may be pairing hearing aids with smart glasses or other wearables that contain the necessary eye-tracking hardware. Products from companies like Tobii and Pupil Labs already offer real-time pupillometry via lightweight headgear for use in research, behavioral analysis, and assistive technology for people with medical conditions that limit movement but leave eye control intact. Apple’s Vision Pro and other augmented reality or virtual reality platforms also include built-in eye-tracking sensors that could support pupillometry-driven adaptations for audio content.

[Image: Smart glasses that measure pupil size, such as these made by Tobii, could help determine listening strain. Credit: Tobii]

Once pupil data is acquired, the next step will be real-time interpretation. Here, again, is where machine learning can use large datasets to detect patterns signifying increased cognitive load or attentional shifts. For instance, if a listener’s pupils dilate unnaturally during a conversation, signifying strain, the hearing aid could automatically engage a more aggressive noise suppression mode or narrow its directional microphone beam. These types of systems can also learn from contextual features, such as time of day or prior environments, to continuously refine their response strategies.
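A minimal sketch of that feedback loop, assuming an external eye tracker streaming pupil diameters: compare each sample with a rolling per-wearer baseline and step up noise suppression on sustained dilation. The window length, z-score thresholds, and three-level output are illustrative assumptions, not values from the studies cited above:

```python
import numpy as np
from collections import deque

class PupilEffortMeter:
    """Toy pupillometry loop: unusual dilation -> stronger noise suppression."""

    def __init__(self, fs=30, baseline_s=60, z_thresh=1.5):
        self.buf = deque(maxlen=fs * baseline_s)  # rolling baseline window
        self.z_thresh = z_thresh

    def update(self, diameter_mm):
        """Feed one eye-tracker sample; return a suppression level 0-2."""
        self.buf.append(diameter_mm)
        if len(self.buf) < self.buf.maxlen // 2:
            return 0  # still learning this wearer's baseline
        samples = np.asarray(self.buf, dtype=float)
        z = (diameter_mm - samples.mean()) / (samples.std() + 1e-9)
        if z > 2 * self.z_thresh:
            return 2  # strong dilation: aggressive suppression, narrow beam
        return 1 if z > self.z_thresh else 0
```

A production system would also have to discount dilation caused by lighting changes, which is one reason pairing the eye tracker with an ambient light sensor, or running this inside AR glasses that already sense the scene, looks attractive.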
While no commercial hearing aid currently integrates pupillometry, adjacent industries are moving quickly. Emteq Labs is developing “emotion-sensing” glasses that combine facial and eye tracking, along with pupil measurement, to do things like evaluate mental health and capture consumer insights. Ethical controversies aside—just imagine what dystopian governments might do with emotion-reading eyewear!—such devices show that it’s feasible to embed biosignal monitoring in consumer-grade smart glasses.

## A Future with Empathetic Hearing Aids

Back at the dinner party, it remains nearly impossible to participate in conversation. “Why even bother going out?” some ask. But that will soon change. We’re at the cusp of a paradigm shift in auditory technology, from device-centered to user-centered innovation. In the next five years, we may see hybrid solutions where EEG-enabled earbuds work in tandem with smart glasses. In 10 years, fully integrated biosignal-driven hearing aids could become the standard. And in 50? Perhaps audio systems will evolve into cognitive companions, devices that adjust, advise, and align with our mental state.

Personalizing hearing-assistance technology isn’t just about improving clarity; it’s also about easing mental fatigue, reducing social isolation, and empowering people to engage confidently with the world. Ultimately, it’s about restoring dignity, connection, and joy.

spectrum.ieee.org
January 22, 2026 at 8:49 PM
Reposted by Disabled Voices Channel
How I’m imagining this discussion went:
“Wouldn’t it be cool if, instead of icons, we labeled the toilets with words?”
“Oh no, not that word, we can’t ever write that word”
#disability
January 22, 2026 at 7:36 PM
Reposted by Disabled Voices Channel
Neuromodulation through multisensory stimulation for visual field deficits in the subacute stage of disease (MULTICAMPO) https://clinicaltrials.gov/study/NCT07358832 using "spatio-temporally congruent, cross-modal audio-visual stimuli", #TDCS
January 22, 2026 at 6:04 PM
Reposted by Disabled Voices Channel
Psilocybin triggers an activity-dependent rewiring of large-scale cortical networks (in mice) https://www.cell.com/cell/fulltext/S0092-8674(25)01305-4
January 22, 2026 at 5:52 PM
Reposted by Disabled Voices Channel
#guidetouch: An obstacle avoidance device with tactile feedback for visually impaired https://arxiv.org/html/2601.13813v2
January 22, 2026 at 3:18 PM
Reposted by Disabled Voices Channel
Vivani subsidiary Cortigent to present Orion Visual Cortical Prosthesis System at NANS 2026, January 23, 2026 https://finance.yahoo.com/news/vivani-subsidiary-cortigent-present-orion-120000709.html 60 cortical surface electrodes, based on Argus II retinal implant tech; $CRGT #bci #neurotech […]
Original post on mas.to
January 22, 2026 at 3:16 PM
Reposted by Disabled Voices Channel
Forbes: Reversing blindness with electronic retinas https://www.forbes.com/sites/williamhaseltine/2026/01/21/reversing-blindness-with-electronic-retinas/ "PRIMA does not cure macular degeneration or restore normal sight. Many people using the device have trouble seeing differences in shades […]
Original post on mas.to
January 22, 2026 at 11:44 AM
Reposted by Disabled Voices Channel
@FastSM Wondering if there is already, or could be, a way for Mona and Fast SM to stay in sync? It does appear that Fast SM knows the last toot I read in Mona, but after doing some reading on Fast SM, it seems like Mona isn't aware of my new place.
January 22, 2026 at 10:49 AM
Reposted by Disabled Voices Channel
Sound frequency predicts the bodily location of auditory-induced tactile sensations in synesthetic and ordinary perception https://academic.oup.com/nc/article/2026/1/niaf064/8429902?login=false #synesthesia
January 22, 2026 at 10:40 AM
Reposted by Disabled Voices Channel
Wondering when Palantir will be used to police the unemployed and/or the disabled…

“We can see here that you enjoyed an outing to such and such, previously you said you experienced extreme fatigue so we will dock your funding accordingly - NDIS Coordinator AI”

#NDIS
#Palantir
#Disability
January 22, 2026 at 2:26 AM
Reposted by Disabled Voices Channel
Plenty of people ask me if it snows in New Zealand. Yes, in some parts. Plenty of people from the northern hemisphere come to New Zealand during July and August to ski.
But mate! Based on all the forecasting and all the hype, it sounds like we are due for the most incredible snowpocalypse the […]
Original post on caneandable.social
January 21, 2026 at 11:47 PM
Reposted by Disabled Voices Channel
RE: https://mastodon.social/@freedomscientific/115928983360170070

This was excellent. Liz and Rachel do a great job.
January 21, 2026 at 11:16 PM
Reposted by Disabled Voices Channel
Disability remains largely invisible in Central Asia, revealing how far modernization has yet to translate into inclusion, participation, and everyday opportunity https://ow.ly/87It50Y0NGO #CentralAsia #Disability #DisabilityAwareness #DisabilityRights
Disability Inclusion Is Emerging as Central Asia’s Next Social Frontier - The Times Of Central Asia
Disability inclusion will not define Central Asia’s future on its own. It does, however, offer a clear measure of whether modernization is reflected not only […]
January 21, 2026 at 5:34 PM
Reposted by Disabled Voices Channel
To ChatGPT: Draw a plot of effective visual resolution as a function of penetrating electrode count using the Neuralink Blindsight brain implant approach for up to 16,000 penetrating wire electrodes in V1. https://chatgpt.com/share/6970cd96-60f4-8004-8b95-5fb1f3d58cf8 […]

[Original post on mas.to]
January 21, 2026 at 1:28 PM
Reposted by Disabled Voices Channel
NDIS tool to determine support not tested on variety of disability types – including diverse autism, experts warn - So typical for things they launch […]
Original post on mastodon.social
January 21, 2026 at 2:10 AM
Reposted by Disabled Voices Channel
Sydney NDIS provider director accused of $3.6m fraud as cash seized at home
https://www.abc.net.au/news/2026-01-20/ndis-provider-director-allegedly-defrauded-millions-from-claims/106249802

Here's a thought: instead of demonising people with disabilities and referring to them as a "drain" on […]
Original post on rants.au
January 20, 2026 at 9:46 PM
Reposted by Disabled Voices Channel
Contrasting success in neurotechnology: Sensory substitution, brain–computer interfaces, and the limits of dimensional reduction […]

[Original post on mas.to]
January 20, 2026 at 7:27 PM
Reposted by Disabled Voices Channel
The short-burst migraine I just had left my right eye like this: a burst vessel or two in a weird curved line across my eye. It happened with extreme pain in my head and one eye. When people think a migraine is “just a headache,” I think I’m going to show them this.

#migraines #chronicillness
January 20, 2026 at 7:18 PM
Reposted by Disabled Voices Channel
inspo porn—
in the mirror my ribs
at the end of chemo

#haiku #disability

kasasagi.blog/2026/01/21/i...
inspo
January 20, 2026 at 3:14 PM