William Gunn
metasynthesis.net
William Gunn
@metasynthesis.net
One thing I appreciate about social media is that people who have a bee in their bonnet about one specific niche thing can reach my attention. See also @andymasley.bsky.social on the water usage of AI.
November 10, 2025 at 3:23 AM
Reposted by William Gunn
Very well done story on 60 Minutes about the harms of grant terminations and freezes at Harvard, with Joan Brugge, Don Ingber, David Liu, and a very compelling young cancer patient, now cured with Liu's technology.

Transcript and video here

www.cbsnews.com/news/researc...
Battle between Trump and universities hurting scientific research in need of federal funding
Federal research funding to universities has fueled breakthroughs for years. The White House is pressuring universities to align with the president's political agenda, or risk losing their funding.
www.cbsnews.com
November 10, 2025 at 1:03 AM
Choose carefully what you optimize for.
November 9, 2025 at 11:42 PM
That Watson was a disagreeable guy should make you more dubious of stories that minimize his contribution, not less.
November 9, 2025 at 11:36 PM
Reposted by William Gunn
This is probably the most literate discussion of alignment problems across crypto and AI protocols you’ll find anywhere that’s still generalist accessible. Emmett Shear and Alex Stokes really went at it with some high-level and sophisticated sparring. www.youtube.com/watch?v=zHGg...
Bridge Atlas - Episode 4: Alignment | Emmett Shear & Alex Stokes
YouTube video by Protocol Town Hall
www.youtube.com
November 8, 2025 at 9:55 PM
The concept of cached assumptions is good to have in your mental toolbox.
this really captures the specific set of cached/unconfronted assumptions that underlie a lot of discourse, especially here (from @andymasley.bsky.social)
November 7, 2025 at 9:43 PM
This is a neat series of posts.
With 2508 citations in 117 years, Student (1908) introducing the t-test wins for huge actual impact with moderate citation impact: www.jstor.org/stable/23315...

Now, how about huge actual impact and minimal citation impact?
November 7, 2025 at 5:14 PM
Reposted by William Gunn
We’ve been heads down @ArcadiaScience for a bit but the @arenabioworks news this week caused me to dump thoughts. hard things are hard; don’t be such a fucking hater. Reflections on parallels w our own institutional experiment here seemay.substack.com/p/big-experi...
Big experiments are only big if they can fail
Some reflections on Arena Bioworks' unexpected wind down as a fellow institutional experimentalist
seemay.substack.com
November 6, 2025 at 4:20 PM
Provenance! How do we know who runs a website? Digital signatures. Content can be signed too. That doesn't mean unsigned content can't be created, but browsers already signal a site's certificate with the padlock icon, and the same could be done for content.
The number one sign you're watching an AI video www.bbc.com/future/article… #AI #videos #context
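The sign/verify flow behind content provenance can be sketched in a few lines. This is a hypothetical illustration, not any browser's or provenance standard's actual mechanism: real schemes use asymmetric signatures (the publisher keeps a private key, anyone verifies with the public key), which need a third-party library in Python, so the standard library's HMAC stands in here just to show the flow. All names and keys are placeholders.

```python
import hashlib
import hmac

# Hypothetical sketch of content signing. Real provenance schemes use
# asymmetric keys (signer holds the private key, anyone can verify with
# the public key); stdlib HMAC stands in here for the sign/verify flow.

def sign(content: bytes, key: bytes) -> str:
    """Produce a hex signature over the content."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(content: bytes, key: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign(content, key), signature)

key = b"example-signing-key"          # placeholder key
video = b"frame bytes of some video"  # placeholder content

sig = sign(video, key)
print(verify(video, key, sig))         # True: content untampered
print(verify(video + b"!", key, sig))  # False: content was altered
```

The point of the sketch: verification fails the moment the content changes, so a signature travels with the content and certifies its origin, just as the padlock certifies a site's.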
November 7, 2025 at 2:12 AM
Yes, liability is the obvious approach here. It's how mature industries open up opportunities for innovation & reduce the need for regulation.
We need innovative technical and societal solutions to mitigate AI risks. I believe liability insurance for AI developers could be an excellent market-based incentive to drive safety standards and accountability, and is an option worth considering.
www.ft.com/content/181f...
Force AI firms to buy nuclear-style insurance, says Yoshua Bengio
Turing Award winner urges governments to require tech groups to cover catastrophic outcomes and fund safety research
www.ft.com
November 7, 2025 at 2:04 AM
Reposted by William Gunn
"Even with notable gains in fluency, GPT-5 is still prone to hallucinate, break rules, and exhibit latent capabilities that raise safety and biosecurity concerns" www.nature.com/articles/s41...
The fragile intelligence of GPT-5 in medicine
Nature Medicine - The latest large language model from OpenAI offers safety gains, persistent risks and the illusion of understanding.
www.nature.com
November 6, 2025 at 2:13 PM
Reposted by William Gunn
Great post this week from @lisalibrarian.bsky.social that hopes to clear up some of the confusion around Creative Commons licenses and the use of such content for AI training. This speaks to the ongoing failure of the publishing and OA communities to make clear just what these licenses mean.
Can a CC License Constrain Fair Use or Other Copyright Limitations or Exemptions? - The Scholarly Kitchen
Creative Commons (CC) licenses expand, not restrict, the permissible uses of copyrighted works.
scholarlykitchen.sspnet.org
November 6, 2025 at 1:04 PM
Reposted by William Gunn
When @fermatslibrary.bsky.social brought up this 1940 article about why we have nothing to worry about from nuclear chain reactions, I checked that it was real and not a modern forgery. Because it seems almost too good to be true in light of current AI safety talk.
November 5, 2025 at 9:53 AM
🤯
I've been enjoying learning about linear regression. This is a really cool machine learning technique with some really elegant theory --- someone should have taught me about this earlier!
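The elegant closed-form theory the post alludes to fits in a few lines. A minimal sketch of ordinary least squares for a single predictor, using only sample means and no libraries (data and names are my own illustration):

```python
# Minimal sketch: simple linear regression by ordinary least squares.
# The closed form is slope = cov(x, y) / var(x), intercept = ȳ − slope·x̄.
def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n  # mean of x
    my = sum(ys) / n  # mean of y
    # Slope: sum of cross-deviations over sum of squared x-deviations.
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]  # exactly y = 2x + 1, so the fit recovers it
print(fit_line(xs, ys))  # → (2.0, 1.0)
```

The same closed form generalizes to many predictors via the normal equations, which is where the theory gets genuinely elegant.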
November 5, 2025 at 2:46 AM
Reposted by William Gunn
A network of peer reviewers in Italy is targeting medical journals, threatening “both the scientific record and patient safety,” a team of researchers including @deevybee.bsky.social report.
Review mill in Italy targeting ob-gyn journals, researchers allege
Examples of “boilerplate” text used in the suspect reviews. M.A. Oviedo-Garcia et al./medRxiv 2025. A network of peer reviewers in Italy is targeting medical journals, threatening “both the scientific…
retractionwatch.com
November 4, 2025 at 8:54 PM
Reposted by William Gunn
I’m going to run a conference called “HolyShitCon🤯 2026” to talk about AI developments. Who wants to come?
November 4, 2025 at 8:19 PM
It's fun to try, but, like with spam, we need several systematic interventions: blacklisting producers & statistical techniques will only do so much. Ultimately we'll need to stop trusting by default - anything that wasn't produced by a member of a chain of trust is suspect.
PSA: Please stop giving advice on spotting the hallmarks of A.I.-generated videos.

Even the pros are having a hard time. We can't OSINT sleuth our way out of this problem. wapo.st/4ljYzJy
Analysis | How to spot an AI video? LOL, you can’t.
It’s time to stop trying to be AI detectives. We need a different approach to a world awash in realistic AI.
wapo.st
November 5, 2025 at 2:29 AM
So important given what this administration is doing to US Agencies.
The John D. and Catherine T. MacArthur Foundation has generously awarded us funding to secure our own storage. This critical processing space will be instrumental in ensuring that large datasets can be temporarily stored, curated, and described.

Thank you, MacArthur Foundation, for your support!
Data Rescue Projects receives support from the John D. and Catherine T. MacArthur Foundation to support data rescue efforts
FOR IMMEDIATE RELEASE Since launching in February 2025, the Data Rescue Project has grown substantially. At this point, the DRP has enabled the rescue of more than 1,000 datasets from US Federal…
www.datarescueproject.org
November 5, 2025 at 2:17 AM
Reposted by William Gunn
Announcing our Frontier Data Centers Hub!

The world is about to see multiple 1 GW+ AI data centers.

We mapped their construction using satellite imagery, permits & public sources — releasing everything for free, including commissioned satellite images.

Highlights in thread!
November 4, 2025 at 7:16 PM
Reposted by William Gunn
Europe has a chance to shape a safer and more values-aligned future for AI innovation, and needs to. This was my main message at the AI in Science Summit in Copenhagen today.
I also presented Scientist AI, LawZero's approach to create technical guardrails and help accelerate scientific discovery.
November 3, 2025 at 5:22 PM
🤯
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.

tl;dr: you can now chat with a brain scan 🧠💬

1/n
November 4, 2025 at 3:13 AM
Reposted by William Gunn
The EU AI Act Newsletter #89: AI Standards Acceleration Updates is live! CEN and CENELEC have announced exceptional measures to speed up the development of European standards supporting the AI Act.
The EU AI Act Newsletter #89: AI Standards Acceleration Updates
CEN and CENELEC have announced exceptional measures to speed up the development of European standards supporting the AI Act.
artificialintelligenceact.substack.com
November 3, 2025 at 1:24 PM
Reposted by William Gunn
AT service operators and moderation thinkers: I put together an early proposal around infra abuse notices across organizational boundaries.

really looking for feedback on this one, it is bait for counter-proposals and references to prior work!
github.com
November 2, 2025 at 7:41 PM
I'm putting a sign on my stove for Known Hot November.
Unfortunately @tangled.org will be down this month in honor of no knot november
starting on my research for known hut november
November 2, 2025 at 3:54 PM