@simonroschmann.bsky.social
PhD Student at @eml-munich.bsky.social, @tum.de, @www.helmholtz-munich.de.
Passionate about ML research.
This project was a collaboration between @eml-munich.bsky.social and Huawei Paris Noah’s Ark Lab. Thank you to my collaborators @qbouniot.bsky.social, Vasilii Feofanov, Ievgen Redko, and particularly to my advisor @zeynepakata.bsky.social for guiding me through my first PhD project!
July 3, 2025 at 7:59 AM
TiViT is on par with time series foundation models (TSFMs) such as Mantis and Moment on the UEA benchmark and significantly outperforms them on the UCR benchmark. The representations of TiViT and TSFMs are complementary; their combination yields SOTA classification results among foundation models.
July 3, 2025 at 7:59 AM
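A minimal sketch of one way to combine the two representation types, assuming each model yields a per-series embedding and the fusion is a simple concatenation followed by a linear probe; the exact combination used for the reported results may differ, and the feature arrays below are random stand-ins.

```python
# Hedged sketch: concatenate TiViT and TSFM embeddings, then train a linear probe.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def combine(z_tivit: np.ndarray, z_tsfm: np.ndarray) -> np.ndarray:
    """Concatenate per-series embeddings from the two model families."""
    return np.concatenate([z_tivit, z_tsfm], axis=1)   # shape (N, d_tivit + d_tsfm)

# Only this linear classifier is trained; both encoders stay frozen.
probe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Random stand-in features for illustration (replace with real TiViT / TSFM embeddings).
rng = np.random.default_rng(0)
z_tivit = rng.normal(size=(100, 768))   # hypothetical TiViT features
z_tsfm = rng.normal(size=(100, 256))    # hypothetical Mantis/Moment features
y = rng.integers(0, 2, size=100)
probe.fit(combine(z_tivit, z_tsfm), y)
```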
We further explore the structure of TiViT representations and find that intermediate layers with high intrinsic dimension are the most effective for time series classification.
July 3, 2025 at 7:59 AM
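The post does not say which intrinsic-dimension estimator is used; below is a minimal sketch using the TwoNN estimator (Facco et al., 2017) as one common choice, applied to a hypothetical (N, D) matrix `feats` of hidden-layer representations.

```python
# Hedged sketch of a TwoNN intrinsic-dimension estimate for one hidden layer.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def two_nn_intrinsic_dim(feats: np.ndarray) -> float:
    """Estimate intrinsic dimension from the ratio of 2nd to 1st nearest-neighbor distances."""
    dist, _ = NearestNeighbors(n_neighbors=3).fit(feats).kneighbors(feats)
    r1, r2 = dist[:, 1], dist[:, 2]          # column 0 is the point itself (distance 0)
    mu = r2 / np.maximum(r1, 1e-12)          # guard against duplicate points
    # Under the TwoNN model, log(mu) ~ Exp(d); the MLE of d is N / sum(log mu).
    return len(mu) / np.sum(np.log(np.maximum(mu, 1.0 + 1e-12)))

# One could compute this per hidden layer and compare it with linear-probe accuracy
# to see how classification performance tracks the intrinsic dimension.
```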
Time Series Transformers typically rely on 1D patching. We show theoretically that the 2D patching applied in TiViT can increase the number of label-relevant tokens and reduce the sample complexity.
July 3, 2025 at 7:59 AM
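A toy token-count comparison to make the intuition concrete; the lengths and patch sizes below are made-up examples, and the formal argument about label-relevant tokens and sample complexity is in the paper, not in this snippet.

```python
# Illustrative token counting only; numbers are arbitrary examples.
T = 512                                  # time series length
patch_1d = 16                            # 1D patch length of a typical TS transformer
tokens_1d = T // patch_1d                # 512 / 16 = 32 tokens

H = W = 224                              # side of the grayscale image fed to the ViT
patch_2d = 16                            # 16x16 pixel patches
tokens_2d = (H // patch_2d) * (W // patch_2d)   # 14 * 14 = 196 tokens

print(tokens_1d, tokens_2d)              # far more tokens available to carry label-relevant content
```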
Our Time Vision Transformer (TiViT) converts a time series into a grayscale image, applies 2D patching, and uses a frozen pretrained ViT for feature extraction. We average the token representations from a single hidden layer and train only a linear classifier.
July 3, 2025 at 7:59 AM
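A minimal sketch of the pipeline described above, under several assumptions not stated in the post: the series is folded into a 2D grid and resized to the ViT input resolution, the backbone is google/vit-base-patch16-224 from Hugging Face transformers, and layer 6 is the averaged hidden layer. The actual TiViT image conversion, normalization, and layer choice may differ.

```python
# Hedged sketch of a TiViT-style pipeline: series -> grayscale image -> frozen ViT
# -> mean of one hidden layer's tokens -> linear classifier.
import torch
import torch.nn.functional as F
from transformers import ViTModel
from sklearn.linear_model import LogisticRegression

device = "cuda" if torch.cuda.is_available() else "cpu"
vit = ViTModel.from_pretrained("google/vit-base-patch16-224").to(device).eval()

def series_to_image(x: torch.Tensor, rows: int = 16, size: int = 224) -> torch.Tensor:
    """Fold a univariate series (length T) into a 2D grid and resize to a 3-channel image."""
    x = x.float()
    pad = (-x.numel()) % rows                      # pad so the series fills `rows` rows
    x = F.pad(x, (0, pad))
    img = x.view(1, 1, rows, -1)                   # fold into a 2D grid
    img = F.interpolate(img, size=(size, size), mode="bilinear", align_corners=False)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)   # simple [0, 1] scaling
    return img.repeat(1, 3, 1, 1)                  # replicate the gray channel for the RGB ViT

@torch.no_grad()
def tivit_features(series_batch, layer: int = 6) -> torch.Tensor:
    """Average the tokens of one hidden layer of the frozen ViT for each series."""
    imgs = torch.cat([series_to_image(s) for s in series_batch]).to(device)
    out = vit(pixel_values=imgs, output_hidden_states=True)
    tokens = out.hidden_states[layer]              # (B, num_tokens, hidden_dim)
    return tokens.mean(dim=1).cpu()

# Only the linear classifier on top of the frozen features is trained, e.g.:
# clf = LogisticRegression(max_iter=1000).fit(tivit_features(X_train), y_train)
# print(clf.score(tivit_features(X_test), y_test))
```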