and team on this important problem and very proud that @apartresearch.bsky.social was able to support this project!
We propose human agency as a new alignment target in HumanAgencyBench, made possible by AI simulation/evals. We find, e.g., that Claude most supports agency but also most tries to steer user values 👇 arxiv.org/abs/2509.08494
It has less than 1,000 views.