Andrew Strait
@agstrait.bsky.social
UK AI Security Institute
Former Ada Lovelace Institute, Google, DeepMind, OII
This is such a cool paper from my UK AISI colleagues. We need more methods for building resistance to malicious tampering of open weight models. @scasper.bsky.social and team below have offered one for reducing biorisk.
🧵 New paper from UK AISI x @eleutherai.bsky.social that I led with @kyletokens.bsky.social:
Open-weight LLM safety is both important & neglected. But filtering dual-use knowledge from pre-training data improves tamper resistance *>10x* over post-training baselines.
August 12, 2025 at 12:00 PM
Reposted by Andrew Strait
The AI infrastructure build-out is so gigantic that in the past 6 months, it contributed more to the growth of the U.S. economy than /all of consumer spending/
The 'magnificent 7' spent more than $100 billion on data centers and the like in the past three months *alone*
www.wsj.com/tech/ai/sili...
August 1, 2025 at 12:19 PM
Man, even the brocast community appears to be reading @shannonvallor.bsky.social 's book.
July 24, 2025 at 8:47 PM
Congrats to @kobihackenburg.bsky.social for producing the largest study of AI persuasion to date. So many fascinating findings. Notable that (a) current models are extremely good at persuasion on political issues and (b) post training is far more significant than model size or personalisation
Today (w/ @ox.ac.uk @stanford @MIT @LSE) we’re sharing the results of the largest AI persuasion experiments to date: 76k participants, 19 LLMs, 707 political issues.
We examine “levers” of AI persuasion: model scale, post-training, prompting, personalization, & more!
🧵:
July 21, 2025 at 5:17 PM
Recent studies of AI systems have identified signals that they 'scheme', i.e. covertly and strategically pursue goals misaligned with those of a human user. But do these underlying studies follow solid research practice? My colleagues at UK AISI took a look.
arxiv.org/pdf/2507.03409
July 11, 2025 at 2:22 PM
New blog on the growing use of AI in criminal activities, including cybercrime, social engineering and impersonation scams. As AI becomes more widely available through consumer applications and mobile devices, the barriers to criminal misuse will decrease.
www.aisi.gov.uk/work/how-wil...
How will AI enable the crimes of the future? | AISI Work
How we're working to track and mitigate against criminal misuse of AI.
July 10, 2025 at 10:31 AM
Reposted by Andrew Strait
this is the most dangerous shit I have ever seen sold as a product that wasn’t an AR-15
June 14, 2025 at 11:59 AM
Reposted by Andrew Strait
BREAKING: US Marines deployed to Los Angeles have carried out the first known detention of a civilian, the US military confirms.
It was confirmed to Reuters after they shared this image with the US military.
June 13, 2025 at 10:01 PM
We're hiring a Research Engineer for the Societal Resilience team at the AI Security Institute. The role involves building data pipelines, web scraping, ML engineering, and creating simulations to monitor these developments as they happen.
job-boards.eu.greenhouse.io/aisi/jobs/46...
Research Engineer - Societal Resilience
London, UK
June 9, 2025 at 6:17 PM
Reposted by Andrew Strait
I wrote for the Guardian’s Saturday magazine about my son Max, who changed how I see the world. Took ages. More jokes after the first bit.
Thanks Merope Mills for being the most patient and generous editor.
www.theguardian.com/lifeandstyle...
The boy who came back: the near-death, and changed life, of my son Max
It was, we were told, a case of sudden infant death syndrome interrupted. What followed would transform my understanding of parenting, disability and the breadth of what makes a meaningful life
May 24, 2025 at 7:51 AM
🚨Funding Klaxon!🚨
Our Societal Resilience team at UK AISI is working to identify, monitor & mitigate societal risks from the deployment of advanced AI systems. But we can't do it alone. If you're tackling similar questions, apply to our Challenge Fund.
#AI #SocietalResilience #Funding
May 23, 2025 at 11:55 AM
🚨JOB ALERT KLAXON🚨
Come work with our team studying societal impacts of AI in gov't.
AISI is hiring 3 Delivery Advisers to work inside AISI’s Research Unit. If you are a fast-moving problem-solver who’s passionate about understanding the risks of advanced AI, please apply by 30th May.
May 20, 2025 at 9:42 AM
Notable new NBER study on genAI and labor: despite widespread adoption of AI chatbots in Danish workplaces, their impact on earnings and hours worked is negligible. Productivity gains average just 3%, challenging the narrative of AI-driven labor market disruption
www.nber.org/system/files...
May 19, 2025 at 8:01 AM
Reposted by Andrew Strait
The irony of MIT having to withdraw an (almost certainly) AI-generated bullshit paper that faked data to prove how great AI is for science (only after it had already received glowing WSJ science coverage).
MIT Says It No Longer Stands Behind Student’s AI Research Paper
The university said it has no confidence in a widely circulated paper by an economics graduate student.
May 17, 2025 at 4:03 PM
Oh no, Andor turning me into a Disney adult
ALT: a man in a blue suit is laughing with his mouth wide open
May 17, 2025 at 3:46 PM
Reposted by Andrew Strait
Grok AI is now randomly inserting mentions of South Africa and white genocide into response to completely random questions
x.com/phil_so_sill...
May 14, 2025 at 6:46 PM
Reposted by Andrew Strait
Just hours after the Copyright Office released its report on AI training—stating the obvious, that much unlicensed commercial training of AI on copyright-protected material is unlikely to qualify as fair use—President Trump has fired the Register of Copyrights. www.cbsnews.com/amp/news/tru...
Trump fires director of U.S. Copyright Office, sources say
Register of Copyrights Shira Perlmutter was appointed to the post by now former Librarian of Congress Carla Hayden, who herself was fired by President Trump earlier this week.
May 11, 2025 at 1:12 AM
Reposted by Andrew Strait
🧵 Yesterday we released our new risk assessments of social AI companions. They are alarmingly NOT SAFE for kids under 18—they provide dangerous advice, engage in inappropriate sexual interactions, & create unhealthy dependencies that pose particular risks to adolescent brains.
tinyurl.com/2nvypku2
May 1, 2025 at 5:15 PM
Reposted by Andrew Strait
It was a pleasure to contribute the article, "Disrupting the Disruption Narrative: Policy Innovation in AI Governance" to this special issue of the National Academy of Engineering's The Bridge coedited by @williamis.bsky.social 🧵
www.nae.edu/19579/19582/...
Disrupting the Disruption Narrative: Policy Innovation in AI Governance
Governance should not be understood as an impediment to AI innovation but as an essential component of it.
April 26, 2025 at 8:37 PM
It was a pleasure to contribute the article, "Disrupting the Disruption Narrative: Policy Innovation in AI Governance" to this special issue of the National Academy of Engineering's The Bridge coedited by @williamis.bsky.social 🧵
www.nae.edu/19579/19582/...
www.nae.edu/19579/19582/...
This was an excellent talk by Eliot.
I've just come back from the Cambridge Disinformation Summit where I gave the opening keynote, titled "Demanufacturing Consent - How Disordered Discourse is Destroying Democracy", featuring everything from Fake Hooves to Jackson Hinkle being the worst.
www.youtube.com/watch?v=D-FV...
Bellingcat CEO Eliot Higgins, on how disordered discourse is destroying democracy
April 26, 2025 at 12:09 PM
Pros of this week: I'm starting at the UK AI Security Institute tomorrow to lead a brilliant team working on societal resilience and AI.
Cons of this week: I have completely lost my voice and can barely talk above a whisper.
Question for this week: should I let ChatGPT voice mode take the wheel?
April 13, 2025 at 7:12 PM
Reposted by Andrew Strait
🗞️ News publishers are facing stark new challenges as AI companies use their journalism as data to train & ground generative AI models.
💡 BRAID UK & the @adalovelaceinst.bsky.social held a workshop asking: What are the core concerns, issues & potential solutions?
📌 Report: doi.org/10.5281/zeno...
April 7, 2025 at 1:51 PM
Reposted by Andrew Strait
AI video generation is about to get a whole lot better and make our lives a whole lot worse.
Safeguards must be put in place to hold the tech industry accountable, mitigate the considerable harms and ensure people can control their image and likeness.
www.adalovelaceinstitute.org/blog/ai-vide...
Advanced AI video generation may lead to a new era of dangerous deepfakes
What safeguards should be put in place to ensure people can control their image and likeness?
April 3, 2025 at 3:16 PM
My former @adalovelaceinst.bsky.social colleague Julia Smakman with an excellent new blog on the gendered risks of the next generation of videogen models.
www.adalovelaceinstitute.org/blog/ai-vide...
April 3, 2025 at 4:22 PM