deen
sir-deenicus.bsky.social
tinkering on intelligence amplification. there are only memories of stories; formed into the right shape, the stories can talk back.
My perhaps optimistic evaluation is that these things will eventually all be eliminated, though it is unknown at what pace and when such abilities will arrive. Better to have gotten an opportunity and lost, etc., is how I see it. Even if the odds are long for any single individual born.
November 10, 2025 at 11:05 PM
Such as setting up a journey where the ways forward are dominated by paths that end up in domains ambivalent or worse about the value of a life. It seems defeatist to me to say childhood cancer means it is better not to exist, rather than drawing from it the motivation to eradicate it. At least that's how I parse it.
November 10, 2025 at 11:02 PM
Every inference I can trace out from even a neutral stance leads to either not a good place or a place whose connecting logics are rather perplexing.

As bad as things are, existence is vastly improvable for all in theory; regardless of age, most alive prefer that they and their loved ones keep existing.
November 8, 2025 at 10:55 PM
Solutions: "brain uploading". Some individuals might choose this option, with probability increasing with age. The mod tracks uploaded individuals, age at upload, and the space available, to provide at least some sense of verisimilitude. Other solutions--within game limits--were larger residence options and accelerated construction.
November 8, 2025 at 10:42 PM
I made a mod for the video game Surviving Mars that, among many options, allows you to research life extension of up to 200 years. This naturally led to overpopulation, homelessness (which led to increased crime) and a jobs problem (it also didn't help that automation of all jobs was available). Solution:
November 8, 2025 at 10:28 PM
But for reversibility, this applies at the LLM level for sure, and is probably better focused there. An LLM is stochastic and can be chaotic, but that is likely rare at the moment, if it occurs at all.
November 1, 2025 at 8:17 PM
Hmm. Not sure each of the labels stochastic, irreversible, chaotic applies to the transformer itself.
November 1, 2025 at 8:10 PM
Hence my prediction that these "AI game engines" will be constrained to dream-logic worlds. The reasoning is somewhat reflected in this post (which, two years ago, fairly well predicted how LLMs would evolve by looking at their expressive limitations).

metarecursive.substack.com/p/transforme...
Transformers might be among the most Complex of Simple Processes
Transformers might reach as near the border to complex computational behavior as a decidable system can get
metarecursive.substack.com
October 26, 2025 at 12:35 AM
Hmm, high KC (Kolmogorov complexity) doesn't quite capture what I mean. KC vs. stable/coherence length? Logical depth too low?

Wait, let me change tack. These are generative models; simulating FSMs or TMs is not their strong suit. Tracking lots of minute variables and changes, then propagating the consequences, is a core limitation.
October 26, 2025 at 12:28 AM
I doubt that'd work, unless the other path is communicated as a non-physical world, e.g. a backrooms, feywild, immaterium, umbra-type place.

Or you go in and add manual constraints, but that's back to programming again--otherwise, nothing stops these worlds from evolving arbitrarily. Their KC is too high.
October 25, 2025 at 11:05 PM
Refusing to acknowledge because of economics is understandable.

Philosophical: is so much of our humanity really capturable by just a few gigabytes of polygons stitched together? Refusing to acknowledge LLMs is perfectly understandable there too and suggests discontent grounded in deep metaphysics.
October 24, 2025 at 11:22 AM
For the second, I find Douglas Hofstadter has been the most direct and complete in communicating it.

I know I had to do some psyche rearrangement after Gemini 2.5 Pro (the first time an LLM could solve an algorithmic problem I'd struggled with). Even now, I've not completely come to terms with this.
October 24, 2025 at 11:16 AM
That's wrong. There exist soulful resistances.

Economic: what about our jobs? (Which, incidentally, indirectly acknowledges quality in its own way.)

Metaphysical: this soulless perversion is a travesty that trivializes our humanity; it's not art. This one is rarely communicated in its deepest form.
October 24, 2025 at 11:12 AM
LLMs aren't close to replacing anyone though.
October 24, 2025 at 12:28 AM
Also, if their next model is smarter and is trained on every token available (which it must be, even if they exclude the wiki), the model will notice this encyclopedia forms its own island (the fact that its distribution will not match natural internet data will contribute to this).
October 24, 2025 at 12:17 AM
Shawn Bradley was 7'6" and had 12 seasons in the NBA.

Kareem was 7'2" and played 20 seasons.

So there are precedents, and Wemby's lanky wiriness, flexibility, and otherworldly quickness and agility for his size are all huge points in his favor regarding career longevity.
October 23, 2025 at 11:25 PM
@emilyrose.bsky.social is correct. It is meaningless to talk about LLMs as dying. They exist out of time. Furthermore, LLMs as simulated personas aren't embodied, evolving Hamiltonians of physical states, only inferences over memory states, so their time is not properly comparable to our notion of it either.
October 17, 2025 at 8:31 PM
text alone. There is no sense of time passing for a transformer (in brains, time tracking occurs at several scales), and there are no dynamical feedback loops--whose discretization in time is non-obvious--to complicate the separation of "mind" states. Everything is already perfectly sliced up in LLMs.
October 17, 2025 at 8:21 PM
You don't need to reach so far. There's a book, permutation city, that provides a good mental framework for this topic.

While it is trivial to formulate the experiments that occur in it for transformers, it's an open question for brains. In a transformer all "state" is recomputable from seeds and
October 17, 2025 at 8:15 PM
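The "recomputable from seeds" point above can be sketched with a toy model: if generation is a pure function of a seed, the whole trajectory can be replayed identically at any time. This is a minimal numpy illustration, not an actual LLM; the fixed logits and the `generate` function are assumptions for the sketch.

```python
import numpy as np

def generate(seed, steps=5):
    """Toy stand-in for autoregressive sampling: every 'mind state'
    is a pure function of the seed, so the whole trajectory can be
    recomputed, paused, or replayed at will."""
    rng = np.random.default_rng(seed)
    logits = np.array([2.0, 1.0, 0.5, 0.1])      # fixed next-token scores
    probs = np.exp(logits) / np.exp(logits).sum()
    return [int(rng.choice(len(probs), p=probs)) for _ in range(steps)]

run_a = generate(seed=42)
run_b = generate(seed=42)   # replay from the same seed
assert run_a == run_b       # identical trajectory: nothing was "lost"
```

Nothing about the run persists outside the seed and the inputs, which is what makes the Permutation City-style pause/replay experiments trivial to state for transformers.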
A transformer is a stateless feedforward network: no recursion, attention meta-optimization is all one step per layer, and there are no concurrent or time-dependent processes, nor computations where both network-wide oscillatory patterns and neuron voltages serve as information channels.
October 17, 2025 at 7:58 PM
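The statelessness claim above can be made concrete: a single attention step is just a pure function of its inputs. A minimal single-head sketch in numpy (no batching, no masking, no learned projections; those omissions are simplifying assumptions):

```python
import numpy as np

def attention(Q, K, V):
    """One attention step as a pure function: inputs in, outputs out.
    No recursion, no persistent state, no clocks -- each layer applies
    exactly one such step."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out1 = attention(Q, K, V)
out2 = attention(Q, K, V)            # same inputs, same outputs
assert np.allclose(out1, out2)       # stateless: nothing carried over
```

Calling it twice on the same inputs yields the same outputs; there is no hidden variable that could drift between calls.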
I think this is a valid stance but much more complex and hard to prove. Mechanically, transformers are close to the simplest case to analyze.

Brains are asynchronous, have concurrent recurrent dynamical feedback loops and lots of state that transformers do not have, and use noise as a resource.
October 17, 2025 at 7:49 PM
But the computational class it can express (do note, diffusion models are not more expressive, given that their iteration count doesn't scale with task complexity) is not relevant to whether it can be taken as operating over sequences or not.
October 16, 2025 at 4:34 PM
Any model trained via on policy RL is being trained to predict over sequences.

Flexibly, P(a|b,c,d,e,...z_n) is not myopic. We have a joint probability model over tokens of seq len = some fraction of listed context size. Highly non-trivial!

As for TC0, with context, an LLM can model a FSM.
October 16, 2025 at 4:28 PM
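The "with context, an LLM can model a FSM" point can be illustrated with the simplest case, a parity automaton: continuing a prompt like "0 1 1 0 -> state:" correctly requires implicitly computing this fold over the tokens sitting in the context window. The prompt framing is a hypothetical example; the FSM itself is standard.

```python
# Parity automaton: 2 states, alphabet {0, 1}. State flips on each 1.
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}

def run_fsm(tokens, state="even"):
    """Fold the transition table over the token sequence -- the
    computation an LLM must track in-context to answer correctly."""
    for t in tokens:
        state = TRANSITIONS[(state, t)]
    return state

assert run_fsm("0110") == "even"   # two 1s -> even parity
assert run_fsm("1") == "odd"
```

Each input token forces exactly one state update, which is why sequence length (i.e. context) is what buys the simulation, not circuit depth.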
Interesting. I have spent a lot of time looking into this. There are some small gains to be found, but just not many viable options. Can you please link this dissertation? I'd be very curious to see if/how any of its ideas can be applied.
October 16, 2025 at 4:17 PM
Post-training with RL does correct for this, and for reasoning it corrects even better. The more and better we get at it, the better.

Technicality: because the transformer satisfies the Markov kernel contract, it can be used in a discrete probability monad with backtracking. An alternative, underexplored path vs. agentic LLM use.
October 16, 2025 at 1:49 PM
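The "discrete probability monad with backtracking" technicality above can be sketched as follows. Distributions are dicts from states to probabilities; a Markov kernel (here a hypothetical stand-in for an LLM's next-token distribution) is pushed through with monadic bind, and backtracking is conditioning: rejected branches are discarded and the survivors renormalized.

```python
from collections import defaultdict

def bind(dist, kernel):
    """Monadic bind for discrete distributions: push each weighted
    state through a Markov kernel (state -> {next_state: prob})."""
    out = defaultdict(float)
    for s, p in dist.items():
        for s2, p2 in kernel(s).items():
            out[s2] += p * p2
    return dict(out)

def condition(dist, pred):
    """Backtracking as conditioning: drop branches failing pred,
    renormalize the survivors."""
    kept = {s: p for s, p in dist.items() if pred(s)}
    z = sum(kept.values())
    return {s: p / z for s, p in kept.items()}

def next_token(prefix):
    """Hypothetical stand-in for an LLM's next-token kernel."""
    return {prefix + "a": 0.6, prefix + "b": 0.4}

dist = bind(bind({"": 1.0}, next_token), next_token)   # two generation steps
dist = condition(dist, lambda s: s.endswith("b"))      # reject + renormalize
```

Because the transformer's next-token distribution really is a Markov kernel over (context, token) states, swapping the toy `next_token` for a model call gives exact enumeration or rejection-style search over short continuations, as an alternative to agentic loops.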