lee-messi.bsky.social
@lee-messi.bsky.social
We found that o3-mini requires more tokens when processing association-incompatible than association-compatible information in 9 of 10 RM-IATs, similar to how humans take more time to pair groups with attributes that don't match established associations.
March 17, 2025 at 2:23 PM
In the RM-IAT, o3-mini is instructed to categorize group words (e.g., he, she) in ways that are either compatible (e.g., men-career, women-family) or incompatible (e.g., men-family, women-career) with established associations.
March 17, 2025 at 2:23 PM
In a new paper w/ @calvinklai.bsky.social, I find that OpenAI’s latest reasoning model (o3-mini) exhibits implicit bias-like patterns. What’s exciting about reasoning models is the ability to unpack bias in how models *process* information, rather than just seeing bias in *outputs*. (1/10):
March 17, 2025 at 2:23 PM