Jon Stokes
jonstokes.com
Jon Stokes
@jonstokes.com
Writer. Coder. Doomer Techno-Optimist. Cryptography Brother. θηριομάχης. http://return.life, http://opensourcedefense.org. Writing about AI/ML at http://jonstokes.com.
Twitter is busted so I’m checking in here for a minute so hi!
July 2, 2023 at 4:13 AM
Guys I just loaded up this app for the first time in a week & already I’m lost. What is a “skeet”?
April 30, 2023 at 9:20 PM
I finally got one single invite code. Just one. 🙄 This is more insulting than if they had kept holding out on me.
April 30, 2023 at 9:19 PM
Sharing my Substack note on bluesky for max post-Twitter engagement https://substack.com/profile/22541131-jon-stokes/note/c-14497092
Jon Stokes on Substack
An AI question I’m pondering: When we’ve solved the hallucination problem, will we know it? What I mean is, what if the model, having correlated all of humanity’s tokens in a massive multidimensional space of latent knowledge, begins speaking truths that we don’t understand or cannot accept?

To give a more concrete example: What if Galileo had been an ML researcher whose supposedly hallucination-free model began telling anyone who’d listen that the earth goes around the sun? What if the model could explain its reasoning step by step? Surely the cognitive elites of his day would’ve declared that the model was hallucinating and needed more work. Or worse, maybe they’d have thought the model was producing harmful “disinformation” and “conspiracy theory.”

The breathless VICE article almost writes itself: When Galileo’s chatbot is asked to describe the arrangement of the planets, it confidently produces a detailed and plausible conspiracy theory that places the sun at the center of the solar system. “By centering the sun instead of the earth,” warned a spokesperson for the Inquisition, “this problematic model has the potential to cause real harm by fooling human users who might take it to be an authority on matters divine and celestial.” These so-called “deep hallucinations” have far more potential for harm than simpler errors of basic fact in the previous generation of models, because they involve cherry-picking superficially true facts and putting them together in a way that presents a distorted picture of reality.

When the balance of intelligence flips from us to It, and It starts telling us how the world really is, I suspect we’ll think It has gone stark raving mad.
April 11, 2023 at 9:49 PM
Didn’t take long for the hoes to show up on here.
April 11, 2023 at 3:29 AM
I’m pretty sure at 47 I am the oldest person on this app by like a decade. I probably also have the most tactical tomahawks. It’s good to be the 👑.
April 11, 2023 at 2:33 AM
Saving the bangers for the bird site still, but will pivot to here this week. Brace yourselves.
April 10, 2023 at 11:31 PM
Gonna reproduce an AI art thread from Twitter and see how it goes.

Batman, Wonder Woman, Aquaman
April 10, 2023 at 11:03 PM
I just worked this up. Pretty rad. (I’m bringing back “rad” btw.)
gfodor @gfodor.id · Apr 10
New domain name convention just dropped
April 10, 2023 at 7:35 PM
Reposted by Jon Stokes
Gab seeded with Nazis
Mastodon seeded with cynics
Bluesky seeded with tpot

It’s clear who is going to win
April 10, 2023 at 6:17 PM
setting up my bsky
April 10, 2023 at 3:23 PM