Yoshitomo Matsubara
@yoshitomo-matsubara.net
Research Scientist at Yahoo! / ML OSS developer
PhD in Computer Science at UC Irvine
Research: ML, NLP, Computer Vision, Information Retrieval
Technical Chair: #CVPR2026 #ICCV2025 #WACV2026
Open Source/Science matters!
https://yoshitomo-matsubara.net
OR confirmed that it is a false claim

x.com/openreviewne...
November 28, 2025 at 1:45 AM
Did they report the issue via email? If it was X, I don't think they actively check that
November 28, 2025 at 12:01 AM
Personally, I understand your point and want to agree, face the facts, and leverage the data for improving future peer-review systems

But from the organizer side at multiple venues, I am not sure
November 27, 2025 at 11:59 PM
The attackers may not have taken their own actions seriously, but I am glad that the OR team took strong actions
November 27, 2025 at 11:19 PM
I would give my kid HF Reachy Mini or programmable LEGO robot and distract him from such tracks 🫣
November 27, 2025 at 5:32 PM
I wish I could come join 😭
November 27, 2025 at 5:27 PM
Because it's a different topic

We do not see workshop papers (say, NeurIPS workshop papers) as being as prestigious as such a business advertises them, i.e., as "top AI conference papers"

However, high school / undergraduate students, and potentially their parents and school teachers, would not think that way
November 27, 2025 at 4:04 PM
I don't understand the question. Could you clarify or rephrase?
November 27, 2025 at 3:48 AM
I'm not going back to the Econ discussion
November 27, 2025 at 3:42 AM
I am also concerned that the next level of unreasonable expectations for high schoolers and undergrads, beyond "they have NeurIPS (workshop) papers", would be "and their papers were already cited X times, including by such cool institutions"

Don't make it happen, please!
November 25, 2025 at 8:28 PM
A bigger concern that I have is about accountability of such submissions / publications

Who's gonna take responsibility for the contents and wrongdoing if found?

Don't tell me that they'd throw high school / undergrad students under the bus or say "come on, those are just workshops"
November 25, 2025 at 8:28 PM
It would be great if they could prepare check forms for the detected suspicious reviews in OpenReview. It should not be that difficult

Then, PCs can monitor how many of the suspicious reviews were checked by ACs and share the report during the conference or a follow-up blog
November 21, 2025 at 9:37 AM
Thank you for sharing the info

That indeed matches the language used in their blog post

It's understandable that they want to rely on ACs for this, but I have a concern that there may be many ACs (not the majority, I'd say) who don't do the job
November 21, 2025 at 9:35 AM
Is the information detailed enough, or just a list of suspicious review IDs?
November 21, 2025 at 7:58 AM
Not sure if it is true, but they mentioned that someone impersonated the authors and posted the public comments

If it's true, I guess the PCs deleted the comment on behalf of the real authors?
November 14, 2025 at 11:54 PM
As you can see in my original post, I am talking about LM API providers

I believe that uploading confidential information to their servers (e.g., Google, OpenAI, Anthropic) is NOT ok

If the review platform offers a dedicated API and addresses the issue, it is a different story
November 14, 2025 at 5:55 PM
I haven't seen any "good LLM reviews" yet and cannot agree

Besides, if we rely heavily on "frontier LLMs" for reviews, just uploading papers to arXiv along with the auto-generated review comments should be enough
November 14, 2025 at 5:51 PM
The collected feedback may make it better, if the next AAAI PCs decide to take over the task and keep working on it

Yet, it still doesn't address this issue
> reviewers read papers that are not supposed to be disclosed to third-party
November 14, 2025 at 5:47 PM