mschoene.bsky.social
@mschoene.bsky.social
Thanks, will take a closer look!
April 17, 2025 at 4:51 AM
Self-iterated models such as Deep Equilibrium Models or Universal Transformers would provide such feedback, right? Here is an example from our latest study showing higher state expressivity in language models with self-feedback: arxiv.org/abs/2502.07827
Implicit Language Models are RNNs: Balancing Parallelization and Expressivity
State-space models (SSMs) and transformers dominate the language modeling landscape. However, they are constrained to a lower computational complexity than classical recurrent neural networks (RNNs), ...
arxiv.org
April 16, 2025 at 1:43 PM
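The self-feedback idea above can be sketched as a fixed-point iteration: the same layer is applied to its own output until the hidden state stops changing. A toy NumPy illustration (all names and the tanh parameterization are hypothetical, not the paper's actual architecture):

```python
import numpy as np

def deq_layer(x, W, U, b, tol=1e-6, max_iter=500):
    """Iterate z <- tanh(W z + U x + b) to a fixed point.

    A toy stand-in for a deep-equilibrium-style layer: the same
    transformation is applied repeatedly (self-feedback) until the
    hidden state z converges.
    """
    z = np.zeros_like(b)
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
d = 8
# Scale W down so the iteration is a contraction and converges.
W = 0.1 * rng.standard_normal((d, d))
U = rng.standard_normal((d, d))
b = rng.standard_normal(d)
x = rng.standard_normal(d)

z_star = deq_layer(x, W, U, b)
# At the fixed point, applying the layer once more changes (almost) nothing.
assert np.allclose(z_star, np.tanh(W @ z_star + U @ x + b), atol=1e-5)
```

The iteration depth adapts to the input, which is one way such models can trade parallelization for expressivity.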
Yes agree! This is particularly unsatisfying if reviewers asked for additional data or had serious misconceptions… Btw, the 75% you observe is exactly the current status of my submission… still hoping to get at least one more response to our rebuttal.
April 7, 2025 at 7:22 AM
What about the Pythia models by eleuther.ai? AFAIK they're not instruction-tuned. Even if the final checkpoints were instruction-tuned, there are checkpoints on HF every few million tokens over the course of pretraining.
EleutherAI
eleuther.ai
April 6, 2025 at 8:37 PM
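For reference, those intermediate checkpoints live as git revisions on the Hugging Face Hub (names like `step1000`). A small sketch of how one might enumerate them; the step spacing (log-spaced early steps, then every 1000 up to 143000) is an assumption taken from the Pythia release description:

```python
# Sketch: enumerating Pythia's intermediate pretraining checkpoints.
# Assumption: revisions are named "step0", "step1", ..., with
# log-spaced early steps (powers of 2 up to 512) and then every
# 1000 steps up to 143000, as described in the Pythia paper.

def pythia_revisions():
    steps = [0] + [2 ** i for i in range(10)]     # 0, 1, 2, 4, ..., 512
    steps += list(range(1000, 143001, 1000))      # 1000, 2000, ..., 143000
    return [f"step{s}" for s in steps]

revisions = pythia_revisions()

# Loading one checkpoint (commented out to avoid a large download):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "EleutherAI/pythia-160m", revision=revisions[12]
# )
```

This yields 154 revision names, matching the number of checkpoints the Pythia suite advertises per model size.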
To take all author comments into account, it would make sense to change scores only after the author-reviewer discussion ends.
April 6, 2025 at 8:28 PM
Was this stated in the email communication with the reviewers? (I am not reviewing for ICML)
The reviewer instructions on icml.cc appear quite ambiguous to me. E.g., does “to acknowledge” imply changing the score? My current interpretation is that this button is merely a nudge toward engagement.
2025 Conference
icml.cc
April 6, 2025 at 8:25 PM
Aren’t there a few more days left for the Author-Reviewer discussion?
April 5, 2025 at 7:12 PM
Reviews will be released for accepted papers, and authors of rejected papers can opt in to release their reviews (source: reviewer guidelines)
April 1, 2025 at 8:52 PM
We’re having a similar question! On tables: LLMs did a good job for me converting between formats (LaTeX table to markdown in OpenReview). But what about figures? Staying anonymous is easy, but not tracking reviewer IPs appears impossible on the internet 😅 any suggestions for images in particular?
March 29, 2025 at 6:39 PM
Interesting. As a third year PhD student, and listed reciprocal reviewer, I was not even invited to serve as a reviewer at ICML 🤔
February 9, 2025 at 4:34 PM
Can only agree and add: do most of your reading on a tablet. It’s a different level of comprehension.
February 2, 2025 at 9:09 PM
Reminds me of the hype around DeepSeek vs OpenAI
January 21, 2025 at 8:17 PM