Leonardo Cotta
cottascience.bsky.social
scaling lawver @ EIT from BH🔺🇧🇷
http://cottascience.github.io
I’d add data/task understanding as a separate mid layer. Most papers I know break in the transition from high to mid.
August 12, 2025 at 7:36 PM
This is why I personally love TMLR. If it's correct and well-written, let's publish. The interesting papers are the ones the community actively recognizes in their own work, e.g. by citing them, extending them, turning them into products, etc. (a process independent of publication).
July 30, 2025 at 11:43 PM
I agree with most of your thread, but classifying "uninteresting work" is quite hard nowadays. Papers have become this "hype-seeking" game, where out of the 10 hyped papers of the month, at most 1 survives further investigation of its results. And even if we think we're immune to this, what counts as interesting?
July 30, 2025 at 11:43 PM
We're at ICML, drop us a line if you're excited about this direction.

📄 Paper: arxiv.org/abs/2507.02083
💻 Code: github.com/h4duan/SciGym
🌍 Website: h4duan.github.io/scigym-bench...
🗂️ Dataset: huggingface.co/datasets/h4d...
July 16, 2025 at 8:17 PM
Also, I see ITCS more as a venue for "out of the box", "bold" ideas or even new areas; I don’t see the papers having simplicity as a goal, but that's just my experience.
June 30, 2025 at 12:48 AM
Mhm, I agree with the idealistic part; I've certainly seen the same. But I know quite a few papers that are aligned with the call, and tbh this happens in any venue. I think the message and the openness to this kind of paper are important, though.
June 30, 2025 at 12:46 AM
this is not my area, but if you think of it in terms of a randomized algorithm (BPP, PP), the hard part is usually the generation, at least for the algorithms we tend to design, e.g. the Schwartz-Zippel lemma. (Although in theory you can have the "hard part" in verification for any problem.)
June 14, 2025 at 4:17 PM
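The verification-vs-generation asymmetry the post above points at can be sketched with Schwartz-Zippel-style polynomial identity testing: verification (evaluating both polynomials at a point) is cheap, and random sampling stands in for the hard "generation" of a distinguishing witness. A minimal illustrative sketch, not from the original thread; the polynomials and parameters are made up for the example:

```python
import random

def poly_identity_test(f, g, trials=20, field_size=10**9):
    """Randomized check of whether f == g as polynomials.

    By the Schwartz-Zippel lemma, if f != g (total degree d), a single
    random evaluation agrees with probability at most d / field_size;
    repeating `trials` times drives the one-sided error down further.
    Verification at a point is easy -- randomness replaces the hard
    deterministic generation of a witness.
    """
    for _ in range(trials):
        x = random.randrange(field_size)
        y = random.randrange(field_size)
        if f(x, y) != g(x, y):
            return False   # found a witness: definitely not identical
    return True            # identical with high probability

random.seed(0)
# (x + y)^2 expands to x^2 + 2xy + y^2, so these should test equal:
same = poly_identity_test(lambda x, y: (x + y) ** 2,
                          lambda x, y: x*x + 2*x*y + y*y)
# ... while dropping the cross term should be caught almost surely:
diff = poly_identity_test(lambda x, y: (x + y) ** 2,
                          lambda x, y: x*x + y*y)
print(same, diff)
```

Note the asymmetry: a `False` answer comes with a concrete witness point, while `True` is only correct with high probability.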
It takes 1 terrible paper for knowledgeable people to stop reading all your papers, this risk is often not accounted for
June 9, 2025 at 8:02 PM
Maybe check the Cat S22; it gives you the basics, e.g. WhatsApp + GPS and nothing else.
June 8, 2025 at 7:40 PM
it just sounds like "see you three times" ;) it's like some people named "Sinho" who are often assumed to be Portuguese/Brazilian; but from what I heard it's a variation of Singh (not sure though)
May 30, 2025 at 11:02 PM
One simple way to reason about this: treatment assignment guarantees you have the right P(T|X). Self-selection changes P(X), a different quantity. Looking at your IPW estimator, you can see that changing P(X) will bias the estimate regardless of whether P(T|X) is correct.
April 18, 2025 at 3:08 PM
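The point above can be checked with a small simulation: IPW with the correct propensity P(T|X) recovers the ATE of whatever population it sees, so X-dependent self-selection (which shifts P(X)) makes it target the selected population's ATE instead of the original one's. A minimal sketch under a made-up data-generating process (all parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

X = rng.binomial(1, 0.5, n)          # covariate, P(X=1) = 0.5
e = 0.3 + 0.4 * X                    # known propensity P(T=1|X)
T = rng.binomial(1, e)
tau = 1 + 2 * X                      # heterogeneous treatment effect
Y = tau * T + X + rng.normal(0, 1, n)

def ipw(Y, T, e):
    """Horvitz-Thompson IPW estimate of the ATE."""
    return np.mean(T * Y / e - (1 - T) * Y / (1 - e))

# Full population: E[tau] = 1 + 2 * 0.5 = 2.
ate_full = ipw(Y, T, e)

# Self-selection depending on X changes P(X) in the observed sample
# (units with X=1 are far more likely to opt in) ...
S = rng.binomial(1, np.where(X == 1, 0.9, 0.1)).astype(bool)

# ... so IPW with the *correct* P(T|X) now targets the selected
# population, where P(X=1) = 0.9 and the ATE is 1 + 2*0.9 = 2.8.
ate_selected = ipw(Y[S], T[S], e[S])

print(round(ate_full, 2), round(ate_selected, 2))
```

Because selection here depends only on X (not on T or Y), P(T|X) is unchanged in the selected sample; the bias relative to the original population comes entirely from the shifted P(X).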
this general idea of using an external world/causal model given by a human and using the LM only for inference is really cool; it's also the insight behind our work in NATURAL. Do you guys think it's possible to write more general software for the interface DAG->LLM_inference->estimate?
April 12, 2025 at 6:27 PM
Oh gotcha. I think it’s just super cheesy to quote feynman at this point haha but it’s a good philosophy to embrace
February 20, 2025 at 1:14 AM
In what contexts do you think it’s misused? Just curious, I’m a big fan and might be overusing it 😅
February 20, 2025 at 1:11 AM
if you're feeling uninspired and getting NaNs everywhere, you can give it your codebase, describe the problem, and ask for suggestions to try or debug. I think of it more as a debugging assistant than a code generator.
February 19, 2025 at 3:02 PM