Kyra Wilson
kyrawilson.bsky.social
@kyrawilson.bsky.social
PhD student at UW iSchool | ai fairness, evaluation, and decision-making | she/her 🥝

kyrawilson.github.io/me
On a positive note: when people took an implicit association test (commonly used in anti-bias training) before doing the resume-screening task, they increased their selection of stereotype-incongruent candidates by 12.7%, regardless of how biased the AI model they interacted with was.
October 21, 2025 at 11:39 AM
We showed people AI recommendations that had varying levels of racial bias and found that human oversight decreased bias in final outcomes by at most 15.2%, which is still far from the outcome bias rates when no AI or an unbiased AI was used.
October 21, 2025 at 11:39 AM
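To make the arithmetic above concrete, here is a minimal sketch of one common way outcome bias can be quantified: the gap in selection rates between two candidate groups. All numbers below are invented for illustration, and the paper's actual metric may differ.

```python
# Hypothetical illustration: outcome bias as the absolute gap in selection
# rates between two candidate groups (all numbers are made up).

def selection_rate(selected: int, total: int) -> float:
    """Fraction of candidates from a group who were selected."""
    return selected / total

def outcome_bias(rate_a: float, rate_b: float) -> float:
    """Absolute selection-rate gap; 0 means parity between groups."""
    return abs(rate_a - rate_b)

# Invented baseline: a biased AI's recommendations alone.
bias_ai_alone = outcome_bias(selection_rate(70, 100), selection_rate(40, 100))  # 0.30

# A 15.2% relative reduction from human oversight (the paper's best case).
bias_with_oversight = bias_ai_alone * (1 - 0.152)

print(round(bias_with_oversight, 4))  # prints 0.2544 — still far from 0 (parity)
```

The point of the sketch: even the largest observed correction shrinks the gap only modestly, leaving final outcomes well short of what an unbiased process would produce.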
We also found that depictions of racial identities are getting more homogenized with successive releases of SD, reinforcing harmful ideas about what people with stigmatized identities "should" look like.
October 21, 2025 at 11:39 AM
We found that the newest model (SD XL) tends to generate images with darker skin tones compared to SD v1.5 and v2.1, but it still over-represents dark skin tones for stigmatized identities compared to non-stigmatized identities.
October 21, 2025 at 11:39 AM
We also find that 89.4% of papers don't provide detailed information about real-world implementation of their findings. Based on this, we made a Fact Sheet to guide researchers in communicating findings in ways that enable model developers and downstream users to implement them appropriately.
October 21, 2025 at 11:39 AM