Yankun Wu
yankunwu.bsky.social
PhD student at Osaka University. Research interests: computer vision, fairness, art.
This is very disappointing and should not happen. Decisions on the paper or workshop proposal should be based only on the submission itself. It is unacceptable to reject the paper due to the "lack of capacity" of the conference (or some other reasons that we never know).
We are one of the unfortunate papers that didn't make the NeurIPS cut-off despite being originally accepted. Second submission this year that gets rejected from #neurips2025 without any reasonable explanation (the other one was a workshop proposal).
NeurIPS has decided to do what ICLR did: As a SAC I received the message 👇 This is wrong! If the review process cannot handle so many papers, the conference needs to split instead of arbitrarily rejecting 400 papers.
September 19, 2025 at 4:15 AM
Reposted by Yankun Wu
Have you ever asked yourself how much your favorite vision model knows about image capture parameters (e.g., the amount of JPEG compression, the camera model, etc.)? Furthermore, could these parameters influence its semantic recognition abilities?
August 18, 2025 at 10:48 AM
🚨Deadline extension!!

The 2nd CEGIS workshop on visual generative models evaluation at #ICCV2025 #ICCV @iccv.bsky.social

🌴 New deadline: July 2nd 2025 (11:59 pm, Pacific Time)
📝 Submission site: cmt3.research.microsoft.com/CEGIS2025
🏁 Check details at: sites.google.com/view/cegis-w...
June 26, 2025 at 2:24 PM
Reposted by Yankun Wu
🚨 Deadline Extension

Instance-Level Recognition and Generation (ILR+G) Workshop at ICCV2025 @iccv.bsky.social

📅 new deadline: June 26, 2025 (23:59 AoE)
📄 paper submission: cmt3.research.microsoft.com/ILRnG2025
🌐 ILR+G website: ilr-workshop.github.io/ICCVW2025/

#ICCV2025 #ComputerVision #AI
June 22, 2025 at 11:01 AM
Reposted by Yankun Wu
#ECCV2024 Workshop Proceedings are now available which includes the VISART papers on #computervision, #arthistory and #culturalheritage link.springer.com/book/10.1007...
Computer Vision – ECCV 2024 Workshops
The ECCV 2024 Workshops' proceedings deal with up-to-date topics in research and applications in computer vision.
June 23, 2025 at 1:41 PM
Reposted by Yankun Wu
Are you at @cvprconference.bsky.social #CVPR2025 ? Come and check out LPOSS.

We show how graph-based label propagation can be used to improve weak, patch-level predictions from VLMs for open-vocabulary semantic segmentation.

📅 June 13, 2025, 16:00 – 18:00 CDT
📍 Location: ExHall D, Poster #421
June 13, 2025 at 12:03 PM
Reposted by Yankun Wu
VRG is presenting 8 papers at #CVPR2025. You can find me and collaborators at the following 4 posters:

Fri 10:30-12:30 A Dataset for Semantic Segmentation in the Presence of Unknowns
Fri 16:00-18:00 LOCORE: Image Re-ranking with Long-Context Sequence Modeling
June 12, 2025 at 11:18 PM
I finished my PhD defense! Thanks to my dearest supervisors Prof. @noagarciad.bsky.social and Prof. Yuta Nakashima for their consistent support during this journey. I am grateful for being their student ✨ Also thanks to Prof. @gtolias.bsky.social for his support during our collaboration! 💪
June 14, 2025 at 1:46 AM
Reposted by Yankun Wu
Are you at @cvprconference.bsky.social? Come by our poster!
📅 Sat 14/6, 10:30-12:30
📍 Poster #395, ExHall D
June 13, 2025 at 5:09 AM
🚨 Call for papers!
The 2nd CEGIS workshop on visual generative models evaluation is back at #ICCV2025!!

🌴 Deadline: June 26th 2025
📝 Submission site: cmt3.research.microsoft.com/CEGIS2025
🏁 Check details at: sites.google.com/view/cegis-w...

See you in Honolulu!
The 2nd CEGIS workshop on visual generative models evaluation is back at #ICCV2025!!

Submit your contributions:
- Deadline: June 26th 2025
- Notification: July 10th, 2025
- Camera-ready: August 18th, 2025

See you in Honolulu!

sites.google.com/view/cegis-w...

@iccv.bsky.social
cegis
2nd workshop on critical evaluation of generative models and their impact on society 19 or 20 October 2025 at ICCV 2025, Honolulu, Hawaii
June 2, 2025 at 1:24 PM
Reposted by Yankun Wu
Call for Papers update - ILR+G workshop @iccv.bsky.social

We will now feature a single submission track with new submission dates.

📅 New submission deadline: June 21, 2025
🔗 Submit here: cmt3.research.microsoft.com/ILRnG2025
🌐 More details: ilr-workshop.github.io/ICCVW2025/

#ICCV2025
May 24, 2025 at 8:27 AM
Reposted by Yankun Wu
The instance-level recognition workshop is back! This year also including instance-level generation.

Submit your in-proceedings papers by June 7, or out-of-proceedings by June 30.

See you in Honolulu! 🏝️

#iccv2025 @iccv.bsky.social
🚨 Call for Papers!

7th Instance-Level Recognition and Generation (ILR+G) Workshop at @iccv.bsky.social

📍 Honolulu, Hawaii 🌺
📅 October 19–20, 2025
🌐 ilr-workshop.github.io/ICCVW2025/

in-proceedings deadline: June 7
out-of-proceedings deadline: June 30

#ICCV2025
ILR+G 2025
The Official Site of ICCV 2025 Workshop, Instance-Level Recognition and Generation Workshop
May 7, 2025 at 9:32 AM
Reposted by Yankun Wu
Gender inequality in Japanese academia
www.nature.com/articles/s44...
April 10, 2025 at 8:56 AM
Reposted by Yankun Wu
ILIAS is a large-scale test dataset for evaluation on Instance-Level Image retrieval At Scale. It is designed to support future research in image-to-image and text-to-image retrieval for particular objects and serves as a benchmark for evaluating foundation models and retrieval techniques.
February 27, 2025 at 2:48 PM
Reposted by Yankun Wu
For PhD and MSc students interested in a research visit to Prague/VRG in 2025: we're open to hosting short-term collaborations or internships on a range of computer vision topics. If this sounds exciting, reach out by e-mail! We'd love to discuss potential projects. Some examples 🧵
#Internship #CV
February 12, 2025 at 8:26 AM
Reposted by Yankun Wu
This work includes a reality check for single-source domain generalization. Were prior methods tuned properly on the validation set? Possibly not, because an appropriate validation set did not exist until now. #WACV2025 paper.
1/n
Bear with me for a moment: Imagine you train a model to classify photos, but you need it to generalize to paintings 🎨.

You only have data from photos, and the target domain is inaccessible due to cost or legal rights.

You train many models and you validate on the source domain.
December 16, 2024 at 3:46 PM
Reposted by Yankun Wu
Excited to present UDON at NeurIPS '24 tomorrow (Thursday 12/12)! If you are interested in a scalable training method for multi-domain image embeddings, come to poster #1410 in the East Exhibit Hall A-C of the Vancouver Convention Center from 11 am to 2 pm (PST) to discuss!
December 12, 2024 at 1:25 AM
Reposted by Yankun Wu
Beyond fairness in computer vision or why addressing bias as a technical problem is not enough

by @timnitgebru.bsky.social and Remi Denton
now publishers - Beyond Fairness in Computer Vision: A Holistic Approach to Mitigating Harms and Fostering Community-Rooted Computer Vision Research
December 8, 2024 at 5:31 AM
Reposted by Yankun Wu
My first PhD paper at Osaka University! Very very very thankful for my wonderful collaborators and advisors!
What better time to announce a new paper than during NeurIPS and ACCV?

happy happy happy to introduce NADA, our latest work on object detection in art! 🎨

with amazing collaborators:
@patrick-ramos.bsky.social, @nicaogr.bsky.social, Selina Khan, Yuta Nakashima
December 10, 2024 at 8:18 AM
Reposted by Yankun Wu
Introducing a new composed (image + text query) image retrieval method that enables domain conversion—style, context, or background—of objects.
1/ 🎉 Excited to share our work, "Composed Image Retrieval for Training-Free Domain Conversion", accepted at WACV 2025! 🚀
December 6, 2024 at 10:56 AM
Reposted by Yankun Wu
A must-read post on bias and why training data isn't the sole culprit
most people want a quick and simple answer to why AI systems encode/exacerbate societal and historical bias/injustice. Due to the reductive but common thinking of "bias in, bias out," the obvious culprit is often training data, but this is not entirely true.

1/
November 25, 2024 at 4:28 AM