Klemen Kotar
@klemenkotar.bsky.social
CS PhD Candidate at Stanford NeuroAI Lab
Reposted by Klemen Kotar
Humans largely learn language through speech. In contrast, most LLMs learn from pre-tokenized text.

In our #Interspeech2025 paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech.

Poster P6, tomorrow (Aug 19) at 1:30 pm, Foyer 2.2!
August 19, 2025 at 1:12 AM
Reposted by Klemen Kotar
What are the organizing dimensions of language processing?

We show that voxel responses during comprehension are organized along 2 main axes: processing difficulty & meaning abstractness—revealing an interpretable, topographic representational basis for language processing shared across individuals
May 23, 2025 at 5:00 PM
Reposted by Klemen Kotar
Sadly couldn’t make it to ICLR Re-Align, but check out @klemenkotar.bsky.social and my preliminary work on ‘model connectomes’—sparse initializations derived across LLM generations that enable efficient learning in low-data regimes, loosely inspired by evolution and lifetime learning.

shorturl.at/PNXXW
April 28, 2025 at 8:54 PM