F. Güney
fguney.bsky.social
research on computer vision, teaching, and movies.
tweets in TR, EN
thanks a lot!! any resources (slides, tutorials, etc.) you recommend?
February 19, 2025 at 12:17 PM
well maybe there is a wider diamond hall ahead, exploration.
February 8, 2025 at 9:27 AM
to me, the surprising part is the involvement of Elon Musk. I cannot stop thinking "I could be working for that man."
February 4, 2025 at 1:57 PM
We’ve just released the code and pre-trained model: github.com/gorkaydemir/...

Also, check out our paper: arxiv.org/abs/2501.18487

(7/7)
GitHub - gorkaydemir/track_on: [ICLR 2025] Track-On: Transformer-based Online Point Tracking with Memory
February 3, 2025 at 8:25 AM
We introduce a flexible memory extension mechanism, allowing users to adapt based on FPS, frame count, and other data characteristics. Our model is fast and lightweight, requiring minimal GPU memory. (6/7)
February 3, 2025 at 8:25 AM
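The adaptive memory idea above can be sketched as a simple sizing rule. This is only an illustration of the stated flexibility, not the paper's actual mechanism; the function name, the "seconds of history" heuristic, and the clamp bounds are all hypothetical.

```python
def memory_length(fps: float, seconds: float = 2.0,
                  min_len: int = 8, max_len: int = 64) -> int:
    """Hypothetical rule: keep roughly `seconds` of tracking history,
    clamped to a range the GPU memory budget allows."""
    return int(min(max_len, max(min_len, round(fps * seconds))))
```

For example, a 30 FPS stream would keep 60 frames of context, while a very slow 2 FPS stream is padded up to the minimum of 8.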
Even without bidirectional information flow (unlike offline models), our approach achieves state-of-the-art results among comparable online and offline tracking models across multiple datasets. (5/7)
February 3, 2025 at 8:25 AM
Unlike traditional methods relying on full temporal modeling, our model operates causally—processing frames without future information. We introduce two memory modules: (i) Spatial Memory, addressing feature drift; (ii) Context Memory, storing full tracking history. (4/7)
February 3, 2025 at 8:25 AM
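A toy sketch of the causal loop with the two memories described above, assuming (hypothetically) that the spatial memory is an exponential moving average of the matched appearance and the context memory is a bounded history of per-frame predictions; the real modules are learned transformer components, so every name and update rule here is an illustrative assumption.

```python
import numpy as np
from collections import deque

class CausalTracker:
    """Frame-by-frame (causal) point tracking with two toy memories."""

    def __init__(self, query_feat, max_context=32, ema=0.9):
        self.query = query_feat                    # spatial memory: counters feature drift
        self.context = deque(maxlen=max_context)   # context memory: tracking history
        self.ema = ema

    def step(self, frame_feats, frame_centers):
        # Causal: only the current frame and stored memories are used,
        # never future frames.
        logits = frame_feats @ self.query
        best = int(np.argmax(logits))
        # Spatial-memory update: blend in the matched patch appearance.
        self.query = self.ema * self.query + (1 - self.ema) * frame_feats[best]
        # Context-memory update: append this frame's prediction.
        self.context.append(frame_centers[best])
        return frame_centers[best]
```

The `deque(maxlen=...)` keeps the history bounded, which is where an adjustable memory size would plug in.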
We simply process points as queries in a transformer decoder. Instead of regressing coordinates (as in dominant methods), we treat tracking as a classification problem, selecting the most likely patch per query and refining with local offsets. (3/7)
February 3, 2025 at 8:25 AM
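The classify-then-refine step above can be sketched in a few lines: score every patch against the point query, pick the most likely patch, then add a small predicted offset. This is a minimal NumPy illustration, not the model's code; `track_query`, `offset_head`, and the inputs' layout are hypothetical.

```python
import numpy as np

def track_query(query, patch_feats, patch_centers, offset_head):
    """Locate one point by classification over patches, then refine.

    query:         (d,) embedding of the tracked point
    patch_feats:   (num_patches, d) per-frame patch features
    patch_centers: (num_patches, 2) pixel centers of each patch
    offset_head:   callable mapping (d,) -> (2,) local offset
    """
    # Classification instead of coordinate regression:
    # score every patch against the query and take the best one.
    logits = patch_feats @ query
    best = int(np.argmax(logits))
    # Refinement: a small offset from the chosen patch center.
    return patch_centers[best] + offset_head(patch_feats[best])
```

With a zero offset head, the prediction is simply the center of the highest-scoring patch.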
Unlike prior work focused on offline point tracking, we target online tracking on a frame-by-frame basis, making it ideal for real-time, streaming scenarios. At the core of our approach is a simple yet effective transformer-based model. (2/7)
February 3, 2025 at 8:25 AM
I fully support your decision 🤗
January 29, 2025 at 3:12 PM