Callan Alexander
@callanalexander.bsky.social
PhD Candidate at QUT, Threatened Species Technical Coordinator at BirdLife Australia.

Bioacoustics & Machine Learning

Music: www.clnmusic.com/about
thanks @ecologygrant.bsky.social! Sorry I missed this post, have been in the depths of PhD write up and neglected my Bluesky account.
September 13, 2025 at 10:52 AM
I’m hoping to look at translation to other species soon. There are some tricky bits when dealing with overlapping calls and complex song types. I think there are ways around that, but how well it performs will ultimately vary depending on the species and region. 🤷
June 9, 2025 at 2:07 AM
I guess at this stage I mostly mean that you can apply this to huge acoustic datasets and collect note annotations at very large ‘scales’ without requiring much more effort.

That being said, I think it will work well on other species with some tweaking!
June 9, 2025 at 2:04 AM
This approach is highly scalable and can be used to rapidly 'harvest' individual note annotations from large acoustic datasets. This allows for further investigation of song structure, geographic call variation and potentially vocal individuality.
June 2, 2025 at 10:27 PM
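As a minimal sketch of how harvested note annotations could feed into that kind of analysis: the DataFrame and its columns below ("region", "note_duration_s", "peak_freq_hz") are hypothetical placeholders, not fields from the paper.

```python
# Hypothetical per-note annotation table; values are made up for illustration.
import pandas as pd

annotations = pd.DataFrame({
    "region":          ["SEQ", "SEQ", "NSW", "NSW"],
    "note_duration_s": [0.62, 0.58, 0.71, 0.69],
    "peak_freq_hz":    [410, 415, 395, 400],
})

# Simple per-region summary as a starting point for looking at
# geographic call variation.
print(annotations.groupby("region")[["note_duration_s", "peak_freq_hz"]]
      .agg(["mean", "std"]))
```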
The clustering process is also very effective at searching 'underneath' a classifier threshold, removing false positives and finding vocalisations that may have been missed. The red circles in the figure show where the owl vocalisations have been split away from other sounds.
June 2, 2025 at 10:27 PM
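A minimal sketch of the idea of searching 'underneath' a threshold, assuming you already have classifier scores for low-threshold candidate detections plus their cluster labels (HDBSCAN-style, with -1 for noise); the function and the confidence cut-off are illustrative, not the paper's code.

```python
import numpy as np

def recover_detections(scores, labels, high_conf=0.9):
    """Keep every member of any cluster that contains at least one
    high-confidence detection; drop noise points and other clusters."""
    scores = np.asarray(scores)
    labels = np.asarray(labels)

    keep = np.zeros(len(scores), dtype=bool)
    for lab in np.unique(labels):
        if lab == -1:
            continue  # HDBSCAN noise points
        members = labels == lab
        if scores[members].max() >= high_conf:
            keep |= members  # recovers low-scoring calls in 'owl' clusters
    return keep
```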
In this paper we use a multi-stage machine learning pipeline (combining supervised and unsupervised methods) to reduce almost 3000 hours of environmental recordings into 10,116 annotations, of which 93% were correctly annotated individual notes of the target species.
June 2, 2025 at 10:27 PM
Thank you! Cool feature, would be keen to be on the contributors list!
January 23, 2025 at 5:17 AM
The pre-print is still a bit rough, so very open to any feedback during the review process! The next step is to try this on other species and calls. It may be a bit trickier with diurnal species or those with more complex vocalisations.
January 21, 2025 at 9:46 PM
Anyone who works in bioacoustics knows that tagging individual notes is extremely time consuming, so I think this is a really good way to 'harvest' note annotations from noisy environmental data. You can then use these annotations to compare geographic call variation or even vocal individuality.
January 21, 2025 at 9:46 PM
We then found that applying iterative unsupervised clustering (using UMAP and HDBSCAN) to the acoustic features was useful for separating false-positive from true detections, and in the end you are also left with individual note annotations that have essentially been automatically tagged!
January 21, 2025 at 9:46 PM
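A minimal sketch of one round of UMAP + HDBSCAN clustering of the kind described above, using the umap-learn and hdbscan packages; the feature matrix is a random placeholder and the parameter values are illustrative, not those used in the paper.

```python
import numpy as np
import umap
import hdbscan

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 20))  # placeholder for real acoustic features

# Reduce the feature space to a low-dimensional embedding.
embedding = umap.UMAP(
    n_neighbors=15, min_dist=0.1, n_components=2, random_state=42
).fit_transform(features)

# Density-based clustering; points that fit no cluster are labelled -1 (noise).
labels = hdbscan.HDBSCAN(min_cluster_size=25).fit_predict(embedding)

# Summarise cluster sizes before reviewing the clusters.
for lab in np.unique(labels):
    print(f"cluster {lab}: {np.sum(labels == lab)} notes")
```

Clusters (and the noise label) can then be inspected and re-clustered, which is where the iterative part comes in.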
In this paper (currently in review) we apply a hybrid approach to automated detection of a cryptic threatened owl, the Powerful Owl (Ninox strenua).

Step one is a neural network to classify the vocalisations, after which we segment the output and extract acoustic features.
January 21, 2025 at 9:46 PM
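A minimal sketch of what the segment-and-extract-features step could look like, assuming classifier detections have already been exported as short audio clips; the energy-based splitting, MFCC summary and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import librosa

def extract_note_features(clip_path, sr=22050, top_db=30):
    """Split a detection clip into candidate notes and summarise each
    note with mean MFCCs (one feature vector per note)."""
    y, sr = librosa.load(clip_path, sr=sr)

    # Energy-based segmentation: keep intervals above the noise floor.
    intervals = librosa.effects.split(y, top_db=top_db)

    features = []
    for start, end in intervals:
        note = y[start:end]
        mfcc = librosa.feature.mfcc(y=note, sr=sr, n_mfcc=20)
        features.append(mfcc.mean(axis=1))  # crude per-note summary
    return np.array(features)
```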
Thanks so much for watching!
December 12, 2024 at 4:30 AM
Will be there, list is a great idea!
December 8, 2024 at 8:18 PM