Stanford HAI
@stanfordhai.bsky.social
The official account of the Stanford Institute for Human-Centered AI, advancing AI research, education, policy, and practice to improve the human condition.
🎥 Missed this year's Hoffman-Yee Symposium? You can now watch all sessions of the research presentations on our YouTube channel. Dive back into the rich and thought-provoking discussions here: www.youtube.com/playlist?lis...
October 30, 2025 at 9:06 PM
Join us tomorrow for a @stanfordhai.bsky.social seminar with Google's CTO of Technology & Society @blaiseaguera.bsky.social. He'll challenge the myth of AI as alien intelligence and reframe it as a social phenomenon born from language—the "DNA" of collective human intelligence. hai.stanford.edu/events/blais...
October 28, 2025 at 10:44 PM
Millions of kids need speech therapy, but there aren't enough clinicians to help them. Can AI fill the gap? New Stanford research shows top language models aren't ready yet—but fine-tuning could change that. hai.stanford.edu/news/using-a...
Using AI to Streamline Speech and Language Services for Children | Stanford HAI
Stanford researchers show that although top language models cannot yet accurately diagnose children’s speech disorders, fine-tuning and other approaches could well change the game.
hai.stanford.edu
October 28, 2025 at 5:19 PM
What are some of the biggest challenges of new products like AI browsers? HAI Co-Director @jlanday.bsky.social says it’s difficult to anticipate a dominant form of interface, since people will likely want to interact with the technology in multiple ways. www.fastcompany.com/91427104/ope...
New AI browsers could usher in a web where agents do our bidding—eventually
OpenAI, Google—and probably others—will engage in a battle that could fundamentally change the way we use the web.
www.fastcompany.com
October 27, 2025 at 11:22 PM
HAI Senior Fellow @erikbryn.bsky.social speaks about new research examining the need for new, objective measurements of labor markets in the modern digital economy via @wsj.com: www.wsj.com/economy/jobs...
It’s Jobs Friday Without a Jobs Number: Here’s Where to Look for Alternatives
The monthly government jobs numbers didn’t arrive on time. But private firms are helping fill the gap.
www.wsj.com
October 24, 2025 at 7:23 PM
How can researchers continue to access public web data in the face of new threats like robots.txt exclusions, legal demands, & bot defenses? Join the Common Crawl Foundation today for a seminar covering their latest insights and ideas for the future of the open web: hai.stanford.edu/events/commo...
Common Crawl Foundation | Preserving Humanity's Knowledge and Making it Accessible: Addressing Challenges of Public Web Data | Stanford HAI
Learn about Common Crawl's insights from a recent data product and informed solutions for the future of public web data.
hai.stanford.edu
October 22, 2025 at 5:38 PM
A Stanford study reveals that leading AI companies are pulling user conversations for training. Should users of AI chatbots worry about their privacy? hai.stanford.edu/news/be-care...
Be Careful What You Tell Your AI Chatbot | Stanford HAI
A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies.
hai.stanford.edu
October 21, 2025 at 10:04 PM
ICYMI: At a recent seminar, AI pioneers Lucy Suchman and Terry Winograd discussed AI's past and present, exploring the lessons generative frictions offer as we navigate the challenges and possibilities of AI. Watch the recording here: hai.stanford.edu/events/gener...
October 20, 2025 at 10:21 PM
Ever wonder why using some devices feels effortless while others frustrate us? Join Prof. @bradamyers.bsky.social as he reveals the invisible design decisions behind interaction techniques at our next @stanfordhai.bsky.social seminar: hai.stanford.edu/events/brad-...
Brad Myers | Pick, Click, and Flick: Stories About Interaction Techniques | Stanford HAI
This talk will explain what interaction techniques are, why they are important and difficult to design and implement, and the history and future of a few interesting examples.
hai.stanford.edu
October 20, 2025 at 8:58 PM
📸 This month, HAI Co-Director @jlanday.bsky.social traveled to Asia to advance critical conversations on human-centered AI. At the 2025 STS Forum and in discussions with Korea's foreign minister, he emphasized the urgent need to design AI systems at the user, community, and society levels.
October 15, 2025 at 5:33 PM
Can AI generate new DNA and show us how our genomes interact at a molecular level? The results could reveal novel insights in biology and pave the way for personalized medicine. Meet the scholars behind Evo 2 at the Hoffman-Yee Symposium on Oct. 14. hai.stanford.edu/events/hoffm...
October 3, 2025 at 3:12 PM
What if an AI model could predict your risk and progression of Alzheimer’s or Parkinson's? Scholars are building a world model of the brain that could create better predictions for diagnosis and care. Learn more at the Hoffman-Yee Symposium on Oct. 14: hai.stanford.edu/events/hoffm...
October 1, 2025 at 6:17 PM
In collaboration with @stanforddata.bsky.social, we kicked off our fall seminar series with HAI faculty affiliate @brianhie.bsky.social. He presented Evo 2, an open-source tool that can predict the form and function of proteins in the DNA of all domains of life. 🧬 hai.stanford.edu/events/brian...
September 30, 2025 at 8:11 PM
📸 Early-career workers in AI-exposed roles faced a 13% drop in employment after generative AI adoption, according to research by Bharat Chandar, Ruyu Chen, and @erikbryn.bsky.social presented at today's Digital Economy Lab seminar. Read the paper here: digitaleconomy.stanford.edu/publications...
September 29, 2025 at 9:08 PM
“Generative AI models offered by major AI companies are used by tens of millions of people every day, and we should encourage them to make their models as safe as they possibly can,” said HAI Policy Fellow @riana.bsky.social via @techpolicypress.bsky.social: www.techpolicy.press/how-congress...
How Congress Could Stifle The Onslaught of AI-Generated Child Sexual Abuse Material | TechPolicy.Press
Cleaning training data might not be enough to hinder a model from creating CSAM, writes Jasmine Mithani.
www.techpolicy.press
September 26, 2025 at 3:55 PM
📣 NEW: How can we validate claims about AI? AI companies often base their testing on specific tasks but overstate their models' overall capabilities. Our latest policy brief presents a three-step validation framework for separating legitimate claims from unsupported ones. hai.stanford.edu/policy/valid...
September 25, 2025 at 4:44 PM
“When only a few have the resources to build and benefit from AI, we leave the rest of the world waiting at the door,” said
@stanfordhai.bsky.social Senior Fellow @yejinchoinka.bsky.social during her address to the UN Security Council. Read her full speech here: hai.stanford.edu/policy/yejin...
September 24, 2025 at 7:41 PM
How do educators decide whether to use AI tools or not? Stanford researchers gathered 60+ K-12 math educators nationwide to understand their AI needs and perspectives and to inform better design for ed tech tools.
Here are their findings: hai.stanford.edu/news/how-mat...
How Math Teachers Are Making Decisions About Using AI | Stanford HAI
A Stanford summit explored how K-12 educators are selecting, adapting, and critiquing AI tools for effective learning.
hai.stanford.edu
September 23, 2025 at 5:40 PM
Reposted by Stanford HAI
AI is revolutionizing drug discovery and opening doors to novel treatments. I spoke with Jim Weatherall about how @AstraZeneca and @Stanford University School of Medicine are collaborating to blend the strengths of industry and academia. Tune in: www.science.org/content/webi...
AI meets medicine: How academic–industry alliances are accelerating drug discovery
www.science.org
September 5, 2025 at 7:53 PM
Many teachers are concerned about AI getting in the way of learning, but a far more dangerous trend is happening: kids using “undress” apps to create deepfake nudes of their peers. @riana.bsky.social studies the impact of AI-generated child sexual abuse: hai.stanford.edu/news/how-do-...
How Do We Protect Children in the Age of AI? | Stanford HAI
Tools that enable teens to create deepfake nude images of each other are compromising child safety, and parents must get involved.
hai.stanford.edu
September 15, 2025 at 4:48 PM
Can we achieve political neutrality in AI? Our latest brief argues that while true neutrality is not technically possible, there are ways to approximate it. We introduce a framework of 8 techniques for approximating political neutrality in AI models: hai.stanford.edu/policy/towar...
Toward Political Neutrality in AI | Stanford HAI
This brief introduces a framework of eight techniques for approximating political neutrality in AI models.
hai.stanford.edu
September 11, 2025 at 4:05 PM
HAI Senior Research Scholar and Policy Fellow Rishi Bommasani is working across disciplines to address complex questions around AI governance. Recently, he joined fellow scholars in authoring a Science paper that sets out a vision for evidence-based AI policy. hai.stanford.edu/news/fosteri...
Fostering Effective Policy for a Brave New AI World: A Conversation with Rishi Bommasani | Stanford HAI
The senior research scholar and policy fellow is working across disciplines to address complex questions around AI governance.
hai.stanford.edu
September 10, 2025 at 5:04 PM
📸 @stanfordhai.bsky.social experts at today's "The Next Revolution of AI: Impact Summit" urge us to guide AI’s future with reasoned optimism and resilience to benefit society and future generations. The event brings together top minds to discuss AI’s next wave in science, industry & beyond.
September 9, 2025 at 11:56 PM
In this Stanford news article, @stanfordhai.bsky.social Senior Fellow @suryaganguli.bsky.social argues that universities are essential for understanding how AI works. Here, he outlines three reasons why: news.stanford.edu/stories/2025...
Three reasons why universities are crucial for understanding AI
There is a “fierce urgency” to understand how artificial intelligence works, says Stanford physicist Surya Ganguli, who is leading a project to bring the inner workings of AI to light through transpar...
news.stanford.edu
September 8, 2025 at 6:40 PM
Google DeepMind and @stanfordhai.bsky.social scholars @mavelous-mav.bsky.social and @mbernst.bsky.social invite academic researchers to enter the AI for Organizations Grand Challenge. Help us find the best ideas for shaping the future of collaboration in the workplace: hai.stanford.edu/aiogc
September 4, 2025 at 5:30 PM