Fangcong Yin
fcyin.bsky.social
@fcyin.bsky.social
CS PhD student @UT Austin studying NLP. Prev:@CornellCIS
Reposted by Fangcong Yin
What if you could understand and control an LLM by studying its *smaller* sibling?

Our new paper introduces the Linear Representation Transferability Hypothesis. We find that the internal representations of different-sized models can be translated into one another using a simple linear (affine) map.
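The idea of an affine map between representation spaces can be sketched with ordinary least squares. This is an illustrative toy, not the paper's code: the activations here are synthetic stand-ins, and the fit simply recovers an affine relation h_large ≈ h_small @ W + b from paired representations.

```python
import numpy as np

# Illustrative sketch (not the paper's method): fit an affine map
# h_large ≈ h_small @ W + b between paired hidden states of a
# "small" and a "large" model, via ordinary least squares.
rng = np.random.default_rng(0)

d_small, d_large, n = 8, 16, 200
H_small = rng.normal(size=(n, d_small))   # stand-in small-model activations
true_W = rng.normal(size=(d_small, d_large))
true_b = rng.normal(size=d_large)
H_large = H_small @ true_W + true_b       # synthetic "large-model" targets

# Append a bias column so lstsq recovers W and b in one solve.
X = np.hstack([H_small, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(X, H_large, rcond=None)
W_hat, b_hat = coef[:-1], coef[-1]

residual = np.linalg.norm(H_small @ W_hat + b_hat - H_large)
```

Because the synthetic data is exactly affine and n far exceeds d_small, the residual is near zero; with real model activations the fit quality itself is what tests the hypothesis.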
July 10, 2025 at 5:26 PM
Reposted by Fangcong Yin
I'm at #NeurIPS2024 this week!

My work (arxiv.org/abs/2406.17692) w/ @gregdnlp.bsky.social & @eunsol.bsky.social exploring the connection between LLM alignment and response pluralism will be at pluralistic-alignment.github.io Saturday. Drop by to learn more!
December 11, 2024 at 5:39 PM
Interpretability can be used to improve LLM fine-tuning - check out our poster at #NeurIPS2024! Where: East Exhibit Hall A-C #3402 (Poster Session 2 East)
When: 11 Dec 4:30 - 7:30 pm PST (Vancouver time)
See you in Vancouver! Would love to chat about PEFT, interp, alignment, and more
December 9, 2024 at 10:22 PM