Austin Tripp
@austinjtripp.bsky.social
(ML ∪ Bayesian optimization ∪ active learning) ∩ (drug discovery)
Researcher @valenceai.bsky.social
Details: austintripp.ca
(I wrote this because it was something I inferred over time, and I think it's helpful to explain acceptance criteria more explicitly to new reviewers)
June 25, 2025 at 8:14 AM
That is, a paper should provide either a unique result or a unique idea (ideally both), and on top of that should have no correctness issues.

Full post is here: www.austintripp.ca/blog/2025-06...

Happy to hear comments/feedback! I know my approach is just one of many!
June 25, 2025 at 8:14 AM
If none of this makes any sense to you but you think multi-objective optimization is relevant, check out my full post below (where I explain MOO in more detail too). Bonus: also has an interactive visualization (kudos to Claude 3.7)

www.austintripp.ca/blog/2025-05...
Chebyshev Scalarization Explained
I've been reading about multi-objective optimization recently. Many papers state limitations of "linear scalarization" approaches, mainly that it might not be able to represent all Pareto-optimal solutions…
www.austintripp.ca
May 16, 2025 at 11:07 AM
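To make the limitation stated in the linked post concrete, here's a toy check (my own sketch, not from the blog; the points and weights are made up): with three Pareto-optimal points where one lies below the convex hull of the front, no nonnegative weighting makes linear scalarization pick it, while Chebyshev scalarization with equal weights does.

```python
# Toy illustration (maximization): linear scalarization cannot reach a
# Pareto-optimal point that lies below the convex hull of the front,
# but Chebyshev scalarization can. Points and weights are illustrative.
import numpy as np

F = np.array([[1.0, 0.0],    # A
              [0.0, 1.0],    # B
              [0.4, 0.4]])   # C: Pareto optimal, but below the A-B hull line
ideal = F.max(axis=0)        # ideal/utopia point, (1, 1) here

# Sweep linear-scalarization weights: point C (index 2) never wins.
for w1 in np.linspace(0.0, 1.0, 101):
    w = np.array([w1, 1.0 - w1])
    assert np.argmax(F @ w) != 2

# Chebyshev scalarization (minimize the worst weighted gap to the ideal)
# with equal weights selects C.
w = np.array([0.5, 0.5])
chebyshev = np.max(w * (ideal - F), axis=1)
print(np.argmin(chebyshev))  # -> 2
```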
2. Unfortunately, maximizing the Chebyshev objective may produce points which are *not* Pareto optimal (only *weakly* Pareto optimal), so some filtering might be required (see the sketch after this post)

...
May 16, 2025 at 11:07 AM
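A minimal sketch of that filtering step (my own illustration, not the blog's code; it assumes all objectives are maximized and the helper names are hypothetical): score a discrete candidate set with the Chebyshev scalarization, take all ties for the best score, and keep only those that survive a non-dominance check.

```python
# Sketch only: Chebyshev scalarization over a discrete candidate set,
# followed by the Pareto filtering mentioned in the post. Assumes
# maximization of all objectives; weights and data are illustrative.
import numpy as np

def chebyshev_score(F, weights, ideal):
    """Worst weighted gap to the ideal point (smaller is better)."""
    return np.max(weights * (ideal - F), axis=1)

def non_dominated_mask(F):
    """Boolean mask of points not dominated by any other point (maximization)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(F, i, axis=0)
        dominates_i = np.all(others >= F[i], axis=1) & np.any(others > F[i], axis=1)
        mask[i] = not dominates_i.any()
    return mask

rng = np.random.default_rng(0)
F = rng.random((200, 2))            # 200 candidates, 2 objectives (toy data)
weights = np.array([0.5, 0.5])
ideal = F.max(axis=0)

scores = chebyshev_score(F, weights, ideal)
best = np.flatnonzero(np.isclose(scores, scores.min()))  # all Chebyshev optima
# Ties for the best score can include weakly Pareto-optimal points,
# hence the extra non-dominance filter:
best_pareto = best[non_dominated_mask(F)[best]]
print(len(best), len(best_pareto))
```

With continuous random data exact ties are rare, but on structured or discretized objective values (where ties do happen) the filter removes the weakly-optimal candidates.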
(also, with ICML reviewing starting, this post will probably be the first in a series of posts about peer reviewing. Stay tuned! 👀)
February 14, 2025 at 10:08 AM
I wrote a blog post explaining this in more detail: www.austintripp.ca/blog/2025-02...

If you think I'm wrong, I'd genuinely like to hear why. Please comment in 🧵
Is offline model-based optimization a realistic problem? (I'm not convinced)
This is a "quickpost": a post which I have tried to write quickly, without very much editing/polishing. For more details on quickposts, see this blog post. Offline model-based optimization (OMBO in…
www.austintripp.ca
February 14, 2025 at 10:08 AM
Second note: there are a lot of more standard topics too (e.g. AI-for-science stuff); I'm just not posting those here.
January 31, 2025 at 9:37 AM
Also funny:

- Position: ML researchers should try to ensure their code is not a heaping pile of dogsh*t

- Position: ML researchers should learn basic math (I'm talking to you, people who don't add error bars to their plots!!)

- Position: focusing on meaningless benchmarks is stupid
January 31, 2025 at 9:36 AM
Other abstracts:

- Position: what if we started holding ML papers to actual standards?

- Position: reviewers should actually read the papers they are reviewing

- Position: reviewers should *at least try* to judge whether a paper's claims are true before accepting it
January 31, 2025 at 9:29 AM
(Note: titles are summarized/anonymized since I don't think I'm allowed to share)
January 31, 2025 at 9:27 AM