Dylan R Muir
@dylanmuir.bsky.social
Electronic Engineer turned Neuroscientist turned Neuromorphic low-power processing evangelist. No war, no hate. Maybe look out for people who have less…

https://dylan-muir.com
AI accelerationists frequently hype the amazing increases in productivity that AI will deliver, and the enormous benefits to society that will result.
August 13, 2025 at 5:25 AM
It’s such a privilege to set foot on this ancient land. These rocks have seen tens of thousands of years of First Nations communities — “Australia” is something brand new, in comparison, and we have the moral obligation to listen and learn, and correct our mistakes.
January 25, 2025 at 11:47 PM
As a result, we use *far* fewer implementation resources for beamforming than standard approaches. Using an SNN also makes us very power efficient, while still achieving state-of-the-art accuracy for SNNs, comparable with standard super-resolution methods such as MUSIC [3]!
December 5, 2024 at 6:47 AM
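As a reference point for the comparison in the post above, here is a minimal textbook-style narrowband MUSIC pseudospectrum for a uniform linear array. This is a generic illustration of the comparison method, not the implementation benchmarked in [1]; the array geometry, frequency, source angle and noise level are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
c, f = 343.0, 2000.0          # speed of sound (m/s), narrowband frequency (Hz)
M, d = 8, 0.04                # number of microphones, inter-mic spacing (m)
theta_true = 25.0             # true source direction (deg), made up
n_snapshots = 200

def steering(theta_deg):
    """Far-field steering vector of a uniform linear array."""
    tau = d * np.arange(M) * np.sin(np.radians(theta_deg)) / c
    return np.exp(-2j * np.pi * f * tau)

# Simulate array snapshots: one narrowband source plus sensor noise
s = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
X = np.outer(steering(theta_true), s)
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

R = X @ X.conj().T / n_snapshots       # spatial covariance matrix
_, V = np.linalg.eigh(R)               # eigenvectors, ascending eigenvalues
En = V[:, :-1]                         # noise subspace (one source assumed)

# MUSIC pseudospectrum: peaks where the steering vector is orthogonal
# to the noise subspace
angles = np.arange(-90.0, 90.5, 0.5)
P = [1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2 for a in angles]
print(f"MUSIC peak at {angles[int(np.argmax(P))]:.1f} deg (true {theta_true} deg)")
```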
By using the Hilbert Transform, we developed a single beamforming approach that works well in the narrowband case, and can exploit all frequencies of a wideband signal so it works well in the wideband case too!
December 5, 2024 at 6:47 AM
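A minimal two-microphone sketch of the idea in the post above: the Hilbert-transform phase difference between channels recovers the inter-microphone delay separately in each frequency band, which is what lets one method cover both narrowband and wideband signals. The sample rate, band edges and the 200 µs delay are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

fs = 16_000
t = np.arange(0, 0.2, 1 / fs)
rng = np.random.default_rng(0)

src = rng.standard_normal(t.size)            # toy wideband source
true_delay = 200e-6                          # inter-microphone delay (s)
mic0 = src
mic1 = np.interp(t - true_delay, t, src)     # delayed copy at the second mic

# In each band, the mean phase difference of the analytic signals,
# divided by 2*pi*f, approximates the inter-microphone delay
for f_lo, f_hi in [(300, 600), (600, 1200), (1200, 2400)]:
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    a0 = hilbert(sosfiltfilt(sos, mic0))
    a1 = hilbert(sosfiltfilt(sos, mic1))
    dphi = np.angle(np.mean(a0 * np.conj(a1)))       # mean phase difference
    delay_us = dphi / (2 * np.pi * 0.5 * (f_lo + f_hi)) * 1e6
    print(f"{f_lo:4d}-{f_hi:4d} Hz band: estimated delay ~ {delay_us:5.1f} µs")
```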
We took a different approach, designed for arrays with many microphones (>2). We start with a construct called the Hilbert Transform to estimate the phase of each microphone signal and encode those phases as spikes. We then use a beamforming method to estimate the source direction.
December 5, 2024 at 6:47 AM
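A minimal sketch of the phase-to-spike step described in the post above, assuming a simple "spike on each phase wrap" encoding. The sample rate, tone frequency and the exact encoding used in the paper are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import hilbert

fs = 16_000                          # sample rate (Hz), assumed
t = np.arange(0, 0.05, 1 / fs)       # 50 ms of signal
mic = np.sin(2 * np.pi * 500 * t)    # toy single-microphone tone at 500 Hz

# Analytic signal via the Hilbert transform -> instantaneous (wrapped) phase
phase = np.angle(hilbert(mic))

# Emit one spike each time the phase wraps from +pi back to -pi,
# i.e. one spike per period, locked to a fixed phase of the carrier
spike_idx = np.where(np.diff(phase) < -np.pi)[0]
spike_times = t[spike_idx]
print(f"{spike_times.size} spikes, mean interval "
      f"{np.mean(np.diff(spike_times)) * 1e3:.2f} ms")
```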
Most SNN implementations of sound source localization take this approach, using the precise differences in spike times generated by a single-frequency sound at two microphones to estimate the location of an audio source.
December 5, 2024 at 6:47 AM
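A toy version of the two-microphone spike-timing idea in the post above: phase-locked spikes from each channel, with the ITD read out from paired spike-time differences. A real SNN would do this with coincidence-detector neurons; the tone frequency, jitter and ITD here are made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
f = 500.0          # tone frequency (Hz)
true_itd = 300e-6  # inter-aural time difference (s), made up
n_spikes = 50
jitter = 50e-6     # spike-timing jitter (s), made up

# One phase-locked spike per cycle of the tone at each ear
left = np.arange(n_spikes) / f + rng.normal(0.0, jitter, n_spikes)
right = np.arange(n_spikes) / f + true_itd + rng.normal(0.0, jitter, n_spikes)

est_itd = np.median(right - left)
print(f"estimated ITD: {est_itd * 1e6:.1f} µs (true {true_itd * 1e6:.0f} µs)")
```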
Mammals use the fact that sounds arriving from different directions produce very precise differences in arrival time between the two ears, known as inter-aural time differences (ITDs). ITDs are encoded by the differences in neuronal spike times produced by our cochleas. [2]
December 5, 2024 at 6:47 AM
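For a sense of scale, a back-of-the-envelope estimate of how large ITDs are, using the simple far-field approximation ITD ≈ (d / c)·sin(θ) with an assumed inter-ear distance (real heads are better described by more detailed models).

```python
import numpy as np

d = 0.18   # inter-ear distance in metres (assumed typical value)
c = 343.0  # speed of sound in air (m/s)

for theta_deg in (0, 30, 60, 90):
    itd_us = d / c * np.sin(np.radians(theta_deg)) * 1e6
    print(f"source at {theta_deg:2d} deg -> ITD ~ {itd_us:5.1f} µs")
```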
We've built a new system for sound source localization, based on spiking neural networks (SNNs), that sets a new state-of-the-art for SNN implementations, is extremely power efficient, and even matches the accuracy of standard DSP-based approaches! [1] arxiv.org/abs/2402.11748
December 5, 2024 at 6:47 AM
Sound source localization is an important part of dealing with audio. We use it to pay attention to someone talking to us in a noisy environment. Smart home speakers use it to identify when someone is speaking, to focus on their voice and reject the background noise.
December 5, 2024 at 6:47 AM
#Benchmarking is crucial for #Neuromorphic #processors to reach mainstream commercial acceptance. Most NM benchmarks are one-off, and don't compare results against commodity hardware. The Neurobench project is trying to fix that!
November 28, 2024 at 2:25 AM
Christmas came early! Probably the densest collection of ultra-low-power Neuromorphic sensory processors on this side of the country 😂

From left to right: XyloAudio 3 (brand new!); stand-alone Speck modules; a Speck HID module (unreleased!); Speck dev kit; XyloIMU dev kit and XyloAudio 2.
November 26, 2024 at 12:51 AM
Hi! 👋 I am working to make #SpikingNeuralNetworks the next big thing for #MachineLearning. Currently I'm at SynSense, focussed on applications for #SNNs, as well as toolchains to enable ML and SW Engineers to use our #Neuromorphic technology for low-power sensing and processing.
November 18, 2024 at 2:39 AM