In our #Interspeech2025 paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech.
Poster P6, tomorrow (Aug 19) at 1:30 pm, Foyer 2.2!
Paper: www.isca-archive.org/interspeech_...
Website: tukoresearch.github.io/auristream-s... (with audio examples)
HuggingFace: huggingface.co/TuKoResearch...
We show that voxel responses during comprehension are organized along two main axes, processing difficulty and meaning abstractness, revealing an interpretable, topographic representational basis for language processing that is shared across individuals.
shorturl.at/PNXXW