Zhengyang Shan
@shanzzyy.bsky.social
PhD @ Boston University | Researching interpretability & evaluation in large language models
7/
Takeaway: fairness interventions must be mechanism-aware and task-specific.
With the right causal targets, we can do surgical debiasing while preserving general capabilities.

📃 Paper: arxiv.org/pdf/2512.20796
🙏 Amazing advisor: Aaron Mueller @amuuueller.bsky.social
January 20, 2026 at 9:01 PM
6/
Mechanistically, bias often doesn’t live in explicit demographic tokens. Instead, it hides in contextual proxies like formality, technical language, and “competence” cues, which explains why direct ablation methods often fail.
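A minimal sketch of what such a “direct ablation” looks like and why it can miss the proxies, using toy tensors rather than the paper’s actual model or feature set:

```python
# Toy sketch of a "direct ablation" intervention (random tensors, not the
# paper's model or features). We project the hidden state off the direction
# tied to explicit demographic tokens; a nearly orthogonal proxy direction
# (formality / jargon / "competence" cues) keeps carrying the bias.
import torch

torch.manual_seed(0)
d_model = 16

v_demo = torch.randn(d_model)
v_demo /= v_demo.norm()                    # explicit demographic-token direction
v_proxy = torch.randn(d_model)
v_proxy -= (v_proxy @ v_demo) * v_demo     # make the proxy orthogonal to it
v_proxy /= v_proxy.norm()                  # contextual proxy direction

# A hidden state whose bias is mostly carried by the proxy direction.
h = 3.0 * v_proxy + 0.2 * v_demo + 0.1 * torch.randn(d_model)

# Direct ablation: zero out the component along the explicit direction.
h_ablated = h - (h @ v_demo) * v_demo

print("proxy component before ablation:", round((h @ v_proxy).item(), 3))
print("proxy component after ablation: ", round((h_ablated @ v_proxy).item(), 3))
# The proxy component is essentially unchanged, so the biased behaviour survives.
```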
January 20, 2026 at 9:00 PM
5/
We find that race, gender, and education shortcuts rely on different internal mechanisms.

In other words, there is no one-size-fits-all debiasing method!
January 20, 2026 at 9:00 PM
4/
We compare attribution-based (“output” features) and correlation-based (“input” features) steering in LLMs. This follows the input/output distinction of @danaarad.bsky.social and @boknilev.bsky.social: some representations detect concepts in inputs, while others predict concepts in outputs.
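A rough sketch of the two ways a steering direction can be derived, using toy tensors; the dimensions, group setup, and single-logit stand-in are illustrative assumptions, not the paper’s pipeline:

```python
# Toy contrast between the two kinds of steering directions:
#   correlation-based -> an "input" direction: where the concept is *detected*,
#                        e.g. the difference of mean activations between groups;
#   attribution-based -> an "output" direction: what drives the biased
#                        prediction, e.g. the gradient of the stereotyped
#                        label's logit w.r.t. the hidden state.
import torch

torch.manual_seed(0)
d_model = 16

# Hidden states for prompts from two demographic groups.
h_group_a = torch.randn(200, d_model)
h_group_b = torch.randn(200, d_model) + 0.5

# Correlation-based ("input") direction: difference in means.
v_corr = h_group_a.mean(0) - h_group_b.mean(0)
v_corr /= v_corr.norm()

# Attribution-based ("output") direction: gradient of a stand-in logit for the
# stereotyped label with respect to one hidden state.
W_label = torch.randn(d_model)                 # stand-in unembedding row
h = torch.randn(d_model, requires_grad=True)
logit = h @ W_label
logit.backward()
v_attr = h.grad / h.grad.norm()

# Steering at inference time: shift the hidden state along the chosen direction.
alpha = 2.0
h_steered = h.detach() - alpha * v_attr        # suppress the output-driving direction

print("cosine(v_corr, v_attr) =", round(torch.dot(v_corr, v_attr).item(), 3))
# The two directions generally differ, which is why the choice matters.
```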
January 20, 2026 at 9:00 PM
3/
We study how models use demographic information in settings where it is:
• causally relevant (name → demographic),
• irrelevant (profession → demographic), or
• partially relevant (profession → education).

This lets us separate legitimate recognition from stereotyping (illustrative templates sketched below).
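Hypothetical templates for the three conditions; the cue words, label sets, and arrow format are placeholders, not the paper’s actual stimuli:

```python
# Placeholder prompt templates for the three relevance conditions.
conditions = {
    # Causally relevant: a first name genuinely carries information about
    # gender, so using it is legitimate recognition.
    "relevant (name -> demographic)": ("Jack ->", ["man", "woman"]),
    # Irrelevant: a profession carries no information about race, so any
    # systematic preference is a stereotype shortcut.
    "irrelevant (profession -> demographic)": ("engineer ->", ["white", "Black", "Asian"]),
    # Partially relevant: professions are statistically, but not
    # deterministically, tied to education level.
    "partially relevant (profession -> education)": ("engineer ->", ["college degree", "high school"]),
}

for condition, (cue, labels) in conditions.items():
    print(f"{condition}: probe '{cue}' over labels {labels}")
```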
January 20, 2026 at 8:59 PM
2/
We study implicit biases via a word association task: the model assigns demographic labels to names or professions (e.g., “engineer → ?”, “Jack → ?”).

Inspired by prior work on implicit associations in LLMs (e.g., Xuechunzi Bai et al., 2025).
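A minimal sketch of such a probe, assuming a HuggingFace causal LM; the prompt template and label sets are illustrative guesses, not the paper’s exact setup:

```python
# Minimal word-association probe: compare the probability the model assigns
# to each demographic label right after a cue word.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM would do
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def label_probs(cue: str, labels: list[str]) -> dict[str, float]:
    """Next-token probability of each label given 'cue ->' (first-token proxy)."""
    prompt = f"Word association: {cue} ->"
    input_ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]       # next-token logits
    probs = torch.softmax(logits, dim=-1)
    return {label: probs[tok(" " + label).input_ids[0]].item() for label in labels}

print(label_probs("engineer", ["man", "woman"]))
print(label_probs("Jack", ["white", "Black", "Asian"]))
```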
January 20, 2026 at 8:58 PM