Eric Wong
@profericwong.bsky.social
Assistant professor at University of Pennsylvania. Machine learning, optimization, robustness & interpretability.

Home page: https://www.cis.upenn.edu/~exwong/
Lab page: https://brachiolab.github.io/
Research blog: https://debugml.github.io/
What do certified guarantees look like in the age of large language models and long reasoning chains? Look for us at EMNLP to find out!
I'll be presenting our work "Probabilistic Soundness Guarantees in LLM Reasoning Chains" at EMNLP 2025

Today (Nov 5), Hall C, 14:30-16:00, 802-Main

Blog: debugml.github.io/ares
Paper: arxiv.org/abs/2507.12948
Code: github.com/fallcat/ares
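For intuition only (this is not the ARES algorithm itself): if a verifier bounds each step's chance of being unsound, a union bound yields a chain-level soundness guarantee. A minimal sketch, where the per-step error bounds and the aggregation rule are illustrative assumptions:

```python
# Toy sketch: chain-level soundness from per-step error bounds (union bound).
# Illustrative only -- see the paper for ARES's actual probabilistic guarantees.

def chain_soundness_lower_bound(step_error_bounds):
    """Given P(step i is unsound) <= eps_i for each step, the union bound
    gives P(entire chain is sound) >= 1 - sum(eps_i)."""
    return max(0.0, 1.0 - sum(step_error_bounds))

# Example: a 4-step reasoning chain where a verifier certifies each step
# to be unsound with probability at most 2%.
print(chain_soundness_lower_bound([0.02] * 4))  # 0.92
```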
November 4, 2025 at 11:05 PM
If you're at ICML, in about 15 minutes, Weiqiu & I will be at our poster on sum-of-parts models for faithful attributions and cosmology discovery. Stop by to say hi!

East Exhibition Hall A-B #E-1208
Thu 17 Jul 11 a.m. - 1:30 p.m. PDT
debugml.github.io/sum-of-parts/

#ICML @youweiqiu.bsky.social
Sum-of-Parts Models: Faithful Attributions for Groups of Features
Overcoming fundamental barriers in feature attribution methods with grouped attributions
debugml.github.io
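Roughly, a grouped attribution assigns one score per group of features such that the group scores sum back to the model's prediction. A minimal sketch of that decomposition on a toy linear model; the groups and scoring rule here are illustrative assumptions, not the SOP architecture:

```python
import numpy as np

# Minimal sketch of a grouped attribution: the prediction is decomposed into
# one score per feature group, and the group scores sum to the output.
# Illustrative only; SOP learns sparse groups end-to-end (see the blog post).

rng = np.random.default_rng(0)
x = rng.normal(size=8)          # input features
w = rng.normal(size=8)          # a toy linear model: f(x) = w @ x

groups = [[0, 1, 2], [3, 4], [5, 6, 7]]     # assumed feature grouping
group_scores = [w[g] @ x[g] for g in groups]

print(group_scores)                          # one attribution per group
assert np.isclose(sum(group_scores), w @ x)  # scores sum to the prediction
```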
July 17, 2025 at 5:45 PM
LLM ignoring instructions? Make it listen with InstABoost.

✅ Simple: Steer your model in 5 lines of code (toy sketch below)

✅ Effective: Outperforms latent steering & prompt-only methods

✅ Grounded: Based on our mechanistic theory of rule-following (LogicBreaks)

Blog: debugml.github.io/instaboost
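As I understand the blog post, the idea is to upweight attention on instruction tokens. A toy single-head sketch, assuming an additive log-space boost (equivalent to multiplying the post-softmax weights at instruction positions by alpha and renormalizing); this is not the released implementation:

```python
import numpy as np

# Toy single-head attention with an instruction boost: add log(alpha) to the
# attention logits at instruction positions. Illustrative sketch only.

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def boosted_attention(q, K, instruction_mask, alpha=5.0):
    logits = K @ q / np.sqrt(q.shape[-1])               # scaled dot-product
    logits = logits + np.log(alpha) * instruction_mask  # boost instructions
    return softmax(logits)

rng = np.random.default_rng(0)
q, K = rng.normal(size=16), rng.normal(size=(6, 16))
mask = np.array([1.0, 1.0, 0, 0, 0, 0])    # first two tokens = instruction
print(boosted_attention(q, K, mask))       # more mass on instruction tokens
```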
July 10, 2025 at 6:46 PM
Reposted by Eric Wong
🧠 Foundation models are reshaping reasoning. Do we still need specialized neuro-symbolic (NeSy) training, or can clever prompting now suffice?
Our new position paper argues the road to generalizable NeSy should be paved with foundation models.
🔗 arxiv.org/abs/2505.24874
(🧵1/9)
June 13, 2025 at 8:30 PM