@jxmnop.bsky.social
Reposted
finally managed to sneak my dog into a paper: arxiv.org/abs/2502.04549
February 10, 2025 at 5:03 AM
Reposted
One of my grand interpretability goals is to improve human scientific understanding by analyzing scientific-discovery models, and this is the most convincing case yet that we CAN learn from model interpretation: chess grandmasters learned new play concepts from AlphaZero's internal representations.
Bridging the Human-AI Knowledge Gap: Concept Discovery and Transfer in AlphaZero
Artificial Intelligence (AI) systems have made remarkable progress, attaining super-human performance across various domains. This presents us with an opportunity to further human knowledge and improv...
arxiv.org
January 27, 2025 at 9:43 PM
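
For readers wondering what "learning concepts from internal representations" can look like mechanically, here is a minimal sketch of one common approach, linear concept probing on a network's hidden activations. Everything here is a hypothetical stand-in (synthetic activations, a made-up concept label); the paper itself works with concept vectors extracted from AlphaZero, not this toy setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: activations from some game-playing network's hidden
# layer, with binary labels for whether a human-nameable concept (say,
# "material advantage") holds in each position. These are synthetic
# stand-ins, not real AlphaZero internals.
rng = np.random.default_rng(0)
n_positions, d_hidden = 1000, 256
true_direction = rng.normal(size=d_hidden)              # synthetic ground-truth concept direction
activations = rng.normal(size=(n_positions, d_hidden))  # fake hidden states
labels = (activations @ true_direction > 0).astype(int)

# A linear probe: if a simple classifier can read the concept off the
# activations, the model plausibly encodes it. The probe's weight vector
# is a candidate "concept vector" one could study or try to teach.
probe = LogisticRegression(max_iter=1000).fit(activations, labels)
print(f"probe accuracy: {probe.score(activations, labels):.2f}")
concept_vector = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
```

High probe accuracy on held-out positions is evidence the concept is linearly encoded; turning such directions into something a grandmaster can absorb is the harder step the paper tackles.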