🔗 https://stojnicv.xyz
Work with @ryan-ramos.bsky.social, @gkordo.bsky.social, Yuta Nakashima, @gtolias.bsky.social, and @noagarciad.bsky.social.
Processing and acquisition traces in visual encoders: What does CLIP know about your camera?
arxiv.org/abs/2508.10637
To be presented at #ICCV2025 (highlight). @iccv.bsky.social
Here, we show kNN classification under several settings, depending on whether the semantic positives and negatives share the same processing parameters as the test image.
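A minimal sketch of that evaluation, assuming precomputed, L2-normalized CLIP image embeddings; the per-image processing label (`proc_id`, e.g. a JPEG quality level), the majority-vote kNN, and the synthetic data are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch: kNN classification where the gallery is restricted to images whose
# processing parameter does (or does not) match the query's. Embeddings are
# assumed unit-normalized, so a dot product gives cosine similarity.
import numpy as np


def knn_accuracy(query_emb, query_cls, query_proc,
                 gallery_emb, gallery_cls, gallery_proc,
                 k=5, same_processing=True):
    """Top-k majority-vote kNN accuracy over a filtered gallery."""
    correct = 0
    for q, cls, proc in zip(query_emb, query_cls, query_proc):
        # keep only gallery images with matching / non-matching processing
        mask = (gallery_proc == proc) if same_processing else (gallery_proc != proc)
        sims = gallery_emb[mask] @ q                    # cosine similarity
        top = np.argsort(-sims)[:k]                     # k nearest neighbours
        votes = gallery_cls[mask][top]
        pred = np.bincount(votes).argmax()              # majority vote
        correct += int(pred == cls)
    return correct / len(query_emb)


if __name__ == "__main__":
    # Synthetic stand-in data, only to make the script runnable end to end.
    rng = np.random.default_rng(0)
    d, n_gal, n_qry, n_cls, n_proc = 512, 2000, 200, 10, 4

    def rand_unit(n):
        x = rng.normal(size=(n, d))
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    g_emb, q_emb = rand_unit(n_gal), rand_unit(n_qry)
    g_cls = rng.integers(0, n_cls, n_gal)
    q_cls = rng.integers(0, n_cls, n_qry)
    g_proc = rng.integers(0, n_proc, n_gal)              # hypothetical proc_id
    q_proc = rng.integers(0, n_proc, n_qry)

    for same in (True, False):
        acc = knn_accuracy(q_emb, q_cls, q_proc, g_emb, g_cls, g_proc,
                           same_processing=same)
        print(f"same processing = {same}: accuracy = {acc:.3f}")
```

Comparing the two settings (positives/negatives with the same vs. different processing parameters) is what exposes how much of the kNN decision rides on processing traces rather than semantics.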