Ömer Şahin Taş
@omersahintas.bsky.social
Research Scientist & Manager at KIT & FZI
We also find that activation functions like JumpReLU, as well as convolutional and MLPMixer layers, offer better interpretability than Koopman autoencoders.

4/4
April 24, 2025 at 1:35 AM
Sparse autoencoders improve the linearity of control vectors, which we apply at virtually no runtime cost -- just a scale-and-add.

👇 3/4
April 24, 2025 at 1:34 AM
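A minimal sketch of what "just a scale-and-add" could look like, assuming steering means adding a scaled control vector to a hidden state at inference time; the function name, shapes, and values below are illustrative, not the authors' code.

```python
import numpy as np

def apply_control_vector(hidden: np.ndarray, control: np.ndarray, alpha: float) -> np.ndarray:
    """Steer a hidden state with a control vector: h' = h + alpha * v.

    This is the scale-and-add operation: one multiply and one add per
    element, hence virtually no runtime cost compared to a forward pass.
    """
    return hidden + alpha * control

# Hypothetical example values.
hidden = np.zeros(4)
control = np.array([1.0, 0.0, -1.0, 0.5])
steered = apply_control_vector(hidden, control, alpha=2.0)  # h + 2*v
```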
Latent-space regularities appear in the motion embeddings of transformer models, enabling vector-arithmetic operations, including control of representations.

👇 2/4
April 24, 2025 at 1:34 AM
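An illustrative sketch of such vector arithmetic, assuming word2vec-style composition of embeddings by addition and subtraction; the embedding names and 2-D vectors below are invented for illustration only.

```python
import numpy as np

# Hypothetical motion embeddings with latent-space regularities:
# directions in the space correspond to interpretable motion attributes.
emb = {
    "accelerate": np.array([1.0, 0.2]),
    "brake":      np.array([-1.0, 0.2]),
    "turn_left":  np.array([0.0, 1.0]),
}

# Compose a new representation arithmetically: start from "accelerate",
# remove the "brake" component, and add a "turn_left" component.
composed = emb["accelerate"] - emb["brake"] + emb["turn_left"]
```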