Keyon Vafa
@keyonv.bsky.social
Postdoctoral fellow at the Harvard Data Science Initiative | Computer science PhD from Columbia University | ML + NLP + social sciences
https://keyonvafa.com
Reposted by Keyon Vafa
💡🤖🔥 @keyonv.bsky.social's talk at metrics-and-models.github.io was brilliant, posing epistemic questions about what Artificial Intelligence "understands".

Next (in two weeks): Alexander Vezhnevets talks about a new multi-actor generative agent-based model. As usual, *all welcome* #datascience #css 💡🤖🔥
August 27, 2025 at 2:57 PM
Reposted by Keyon Vafa
💡🤖🔥 The talk by Juan Carlos Perdomo at metrics-and-models.github.io was so thought-provoking that the convenors stayed in the room afterwards to discuss it for quite some time!

Next, we have @keyonv.bsky.social asking: "What are AI's World Models?". Exciting times over here, all welcome!💡🤖🔥
August 14, 2025 at 9:50 AM
Can an AI model predict perfectly and still have a terrible world model?

What would that even mean?

Our new ICML paper (poster tomorrow!) formalizes these questions.

One result tells the story: A transformer trained on 10M solar systems nails planetary orbits. But it botches gravitational laws 🧵
July 14, 2025 at 1:50 PM
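The post above asks how a model can predict orbits well yet get the underlying law wrong. Below is a minimal sketch (my own toy, not the paper's procedure) of one way to probe a trajectory predictor for the force law it implies: take two predicted steps, back out an acceleration by finite differences, and regress log-acceleration on log-distance. Here the "predictor" is a stand-in Newtonian simulator, so the recovered exponent should come out near -2; a transformer's next-state predictions could be probed the same way.

```python
import numpy as np

# Toy illustration (not the paper's procedure): given a next-state predictor
# for planetary motion, check whether its implied force law looks like the
# inverse-square law by regressing log|acceleration| on log(distance).

G_M = 1.0   # gravitational parameter of the star (assumed units)
DT = 1e-3   # time step

def newtonian_predictor(pos, vel):
    """Stand-in 'model': one step of true Newtonian dynamics.
    In the paper's setting this would be a transformer's next-state prediction."""
    r = np.linalg.norm(pos)
    acc = -G_M * pos / r**3
    new_vel = vel + DT * acc
    new_pos = pos + DT * new_vel
    return new_pos, new_vel

def implied_acceleration(predictor, pos, vel):
    """Finite-difference acceleration implied by two predicted steps."""
    p1, v1 = predictor(pos, vel)
    p2, _ = predictor(p1, v1)
    return (p2 - 2 * p1 + pos) / DT**2

# Probe the predictor at a range of orbital radii (circular orbits).
radii = np.linspace(0.5, 5.0, 30)
accs = []
for r in radii:
    pos = np.array([r, 0.0])
    vel = np.array([0.0, np.sqrt(G_M / r)])  # circular-orbit speed
    accs.append(np.linalg.norm(implied_acceleration(newtonian_predictor, pos, vel)))

# Fit log|a| = c + k * log(r); Newtonian gravity gives k ≈ -2.
k, c = np.polyfit(np.log(radii), np.log(accs), 1)
print(f"implied force-law exponent: {k:.3f} (inverse-square law predicts -2)")
```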
Reposted by Keyon Vafa
If we know someone’s career history, how well can we predict which jobs they’ll have next? Read our profile of @keyonv.bsky.social to learn how ML models can be used to predict workers’ career trajectories & better understand labor markets.

medium.com/@gsb_silab/k...
Keyon Vafa: Predicting Workers’ Career Trajectories to Better Understand Labor Markets
June 30, 2025 at 3:39 PM
Reposted by Keyon Vafa
Foundation models make great predictions. How should we use them for estimation problems in social science?

New PNAS paper with @susanathey.bsky.social, @keyonv.bsky.social & the Blei Lab:
Bad news: Good predictions ≠ good estimates.
Good news: Good estimates possible by fine-tuning models differently 🧵
June 30, 2025 at 12:16 PM
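A toy numerical illustration of the "good predictions ≠ good estimates" point in the post above (my own sketch, not the paper's setting or estimator): a predictor can achieve high R² while systematically shrinking toward the mean, and a downstream regression run on those predictions then inherits the shrinkage as bias.

```python
# Toy illustration: a predictor can have high R^2 yet still bias a downstream
# estimate, because prediction loss tolerates systematic shrinkage that a
# regression coefficient does not.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
y = 2.0 * x + rng.standard_normal(n)           # true effect of x on y is 2.0

y_hat = 1.6 * x                                # "good" but shrunken predictions

r2 = 1 - np.mean((y - y_hat) ** 2) / np.var(y)
slope_from_preds = np.polyfit(x, y_hat, 1)[0]  # regression of predictions on x
slope_from_truth = np.polyfit(x, y, 1)[0]

print(f"prediction R^2:       {r2:.2f}")                # ~0.77: predictions look good
print(f"slope using y_hat:    {slope_from_preds:.2f}")  # ~1.6: badly biased estimate
print(f"slope using true y:   {slope_from_truth:.2f}")  # ~2.0
```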
Reposted by Keyon Vafa
*Please repost* @sjgreenwood.bsky.social and I just launched a new personalized feed (*please pin*) that we hope will become a "must use" for #academicsky. The feed shows posts about papers filtered by *your* follower network. It's become my default Bluesky experience bsky.app/profile/pape...
March 10, 2025 at 6:14 PM
Reposted by Keyon Vafa
Happy to write this News & Views piece on the recent audit showing LLMs picking up "us versus them" biases: www.nature.com/articles/s43... (Read-only version: rdcu.be/d5ovo)

Check out the amazing (original) paper here: www.nature.com/articles/s43...
Large language models act as if they are part of a group - Nature Computational Science
An extensive audit of large language models reveals that numerous models mirror the ‘us versus them’ thinking seen in human behavior. These social prejudices are likely captured from the biased conten...
January 2, 2025 at 2:11 PM
Reposted by Keyon Vafa
Applications open for the SICSS-ODISSEI summer school at Erasmus University. For PhD students, post-docs, and early-career researchers interested in computational social science. More info: odissei-data.nl/event/sicss-...
SICSS-ODISSEI Summer School 2025 - ODISSEI – Open Data Infrastructure for Social Science and Economic Innovations
From 16 to 27 June 2025, ODISSEI is hosting its fourth summer school at Erasmus University in Rotterdam, as part of the Summer Institutes in Computational Social Science (SICSS) and the Erasmus Gradua...
December 18, 2024 at 4:12 PM
Reposted by Keyon Vafa
📢Announcing 1-day CHI 2025 workshop: Speech AI for All! We’ll discuss challenges & impacts of inclusive speech tech for people with speech diversities, connecting researchers, practitioners, policymakers, & community members. 🎉Apply to join us: speechai4all.org
December 16, 2024 at 7:45 PM
At NeurIPS today through Sunday!

Today I'll be presenting our spotlight paper on evaluating LLM world models at the 4:30pm poster session (#2301).

On Saturday I'll be co-organizing the Behavioral ML workshop. Hope to see you there!

Paper: arxiv.org/abs/2406.03689
Workshop: behavioralml.org
Evaluating the World Model Implicit in a Generative Model
Recent work suggests that large language models may implicitly learn world models. How should we assess this possibility? We formalize this question for the case where the underlying reality is govern...
December 12, 2024 at 6:59 PM
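The spotlight paper above evaluates generative models whose training data come from a deterministic finite automaton, with metrics inspired by the Myhill–Nerode theorem. Here is a stripped-down sketch of that intuition (a toy example, not the paper's actual metrics): two prefixes that land in the same DFA state should accept exactly the same continuations, so a model with a correct world model should treat them interchangeably.

```python
# Toy check of the Myhill–Nerode intuition: when the true data-generating
# process is a deterministic finite automaton (DFA), prefixes reaching the
# same state should be interchangeable with respect to all continuations.
from itertools import product

# Toy DFA over {'a', 'b'}: state counts 'a's mod 3; accept when count % 3 == 0.
START, ACCEPT = 0, {0}

def step(state, symbol):
    return (state + 1) % 3 if symbol == 'a' else state

def run(prefix):
    state = START
    for s in prefix:
        state = step(state, s)
    return state

def dfa_accepts(string):
    return run(string) in ACCEPT

def model_accepts(string):
    """Stand-in for the generative model under evaluation.
    Here it is the ground truth; in practice, substitute the model's judgment."""
    return dfa_accepts(string)

def same_continuation_behavior(prefix1, prefix2, max_len=4):
    """Do the two prefixes accept exactly the same suffixes (up to max_len)?"""
    for n in range(max_len + 1):
        for suffix in product('ab', repeat=n):
            s = ''.join(suffix)
            if model_accepts(prefix1 + s) != model_accepts(prefix2 + s):
                return False
    return True

# 'aaa' and '' reach the same DFA state, so they should be indistinguishable.
print(same_continuation_behavior('aaa', ''))   # expected: True
print(same_continuation_behavior('a', 'b'))    # different states: expected False
```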
Reposted by Keyon Vafa
I'm excited to use my first post here to introduce the first paper of my PhD, "User-item fairness tradeoffs in recommendations" (NeurIPS 2024)!

This is joint work with Sudalakshmee Chiniah and my advisor @nkgarg.bsky.social

Description/links below: 1/
December 11, 2024 at 5:22 AM
Reposted by Keyon Vafa
I am very excited to share our new NeurIPS 2024 paper + package, Treeffuser! 🌳 We combine gradient-boosted trees with diffusion models for fast, flexible probabilistic predictions and well-calibrated uncertainty.

paper: arxiv.org/abs/2406.07658
repo: github.com/blei-lab/tre...

🧵(1/8)
December 2, 2024 at 9:48 PM
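For the curious, a hedged usage sketch of Treeffuser based on the repo's description of a scikit-learn-style estimator; the class name, import path, and sample shape below are assumptions, so check the README at github.com/blei-lab/treeffuser before relying on them.

```python
# Hedged usage sketch for the Treeffuser package (blei-lab/treeffuser).
# The fit/sample interface below is assumed from the repo's description of a
# scikit-learn-style estimator; consult the README for the exact API.
import numpy as np
from treeffuser import Treeffuser  # assumed import path

rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(1000, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(1000)  # noisy toy data

model = Treeffuser()          # gradient-boosted trees + diffusion under the hood
model.fit(X, y)

# Draw predictive samples to quantify uncertainty, e.g. a 90% interval.
X_test = np.linspace(0, 2 * np.pi, 50).reshape(-1, 1)
samples = model.sample(X_test, n_samples=200)   # assumed shape: (n_samples, n_test)
lo, hi = np.quantile(samples, [0.05, 0.95], axis=0)
print(lo[:5], hi[:5])
```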
Thank you Nature and @anilananth.bsky.social for this great feature on LLMs and AGI (and for highlighting our work arxiv.org/abs/2406.03689)
December 4, 2024 at 4:59 PM
hi
December 4, 2024 at 4:46 PM