@john44234.bsky.social
Reposted by john44234.bsky.social
look at this shit!
October 18, 2025 at 11:39 PM
Reposted by john44234.bsky.social
i think "how much and what information is going in and out" is important but where your thinky bits are relative to your body or whether the body really exists aren't
August 29, 2025 at 4:09 AM
Reposted by john44234.bsky.social
because whatever is different about human cognition -- whatever separates us from other apes -- seems to relate to the very strong frontal suppression of what are naively very useful instincts. in other words, intelligence seems to be a form of exaptive generalization from specialized capabilities.
August 8, 2025 at 5:31 AM
Reposted by john44234.bsky.social
specifically, i think that what people describe as "qualia" is coactivation of naively unrelated concepts due to their similarity when projected down to a low-dimensional representation
July 22, 2025 at 5:16 PM
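the "projection" claim above can be made concrete with a toy numeric sketch (my illustration, not the poster's model; all numbers are invented): two vectors that are mostly unrelated in a high-dimensional space can become strongly similar once projected onto a low-dimensional subspace that captures a feature they share.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
d = 512
shared = rng.normal(size=d)              # coarse feature both concepts carry
a = 3 * rng.normal(size=d) + shared      # concept A: mostly its own fine detail
b = 3 * rng.normal(size=d) + shared      # concept B: mostly its own fine detail

full_sim = cosine(a, b)                  # low: fine detail dominates in 512 dims

# project down to 2 dims, one of which happens to capture the shared axis
u = shared / np.linalg.norm(shared)
w = rng.normal(size=d)
w /= np.linalg.norm(w)
P = np.stack([u, w])                     # (2, d) projection matrix
low_sim = cosine(P @ a, P @ b)           # high: the two concepts now "coactivate"
```

in the low-dimensional view the shared component dominates, so naively unrelated concepts end up nearly parallel.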
Reposted by john44234.bsky.social
hunyuan video (the premier off-the-shelf video diffusion model) ships with a suite of IP-Adapters that let you swap in consistent characters and bind action in the scene to existing video, in a way that controls the morphing and clipping that is currently diagnostic of most video models.
May 19, 2025 at 7:15 PM
Reposted by john44234.bsky.social
i think that the answer to "is it intelligence" is neither "no" nor "yes." if you have the background to read it, Anthropic's Transformer Circuits series is some of the most interesting work out there.
transformer-circuits.pub
Transformer Circuits Thread
Can we reverse engineer transformer language models into human-understandable computer programs?
May 7, 2025 at 6:34 PM
Reposted by john44234.bsky.social
on the puritan left because i don’t want to discourse with a guy who thinks i’m subhuman
March 22, 2025 at 7:30 PM
Reposted by john44234.bsky.social
llms are a neat trick that lets you train sequence to sequence while executing auto-generatively. diffusion models are a neat trick to learn a distribution to distribution map.

the loudest ai skeptics are not at all interested in why these models work so well despite their simplicity
November 14, 2024 at 6:30 AM
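the "train sequence-to-sequence while executing auto-generatively" point can be sketched with a toy bigram model (my illustration; the corpus and all names are made up): training consumes whole sequences at once, with every position serving as a target, while generation feeds its own output back in one token at a time.

```python
import numpy as np

# training: sweep whole sequences, counting every (prev -> next) pair at once
corpus = ["the cat sat", "the dog sat", "the cat ran"]
vocab = sorted({w for s in corpus for w in s.split()})
ix = {w: i for i, w in enumerate(vocab)}

counts = np.ones((len(vocab), len(vocab)))   # +1 smoothing
for s in corpus:
    toks = s.split()
    for prev, nxt in zip(toks, toks[1:]):
        counts[ix[prev], ix[nxt]] += 1
probs = counts / counts.sum(axis=1, keepdims=True)

# generation: autoregressive — condition only on the model's own output
def generate(start, steps, rng=np.random.default_rng(0)):
    out = [start]
    for _ in range(steps):
        nxt = rng.choice(len(vocab), p=probs[ix[out[-1]]])
        out.append(vocab[nxt])
    return " ".join(out)

sample = generate("the", 2)
```

the asymmetry is the point: the training pass sees both sides of every transition in parallel, but sampling only ever sees its own history.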
Reposted by john44234.bsky.social
I am a broken record on this but LLM text embeddings are an incredible breakthrough, and the ability for anyone to build pretty good classifiers with structured output could be insanely useful.

Trying to build NLP interfaces is taking my team an extremely long time, and the results are extremely brittle
November 13, 2024 at 3:49 PM
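a minimal sketch of the "embeddings plus a cheap classifier" idea, assuming a nearest-centroid approach: average the embeddings of a few labeled examples per class, then assign new text to the closest centroid. `embed` here is a deterministic toy bag-of-words stand-in for a real embedding model (in practice you'd call an embedding API or library), and all texts and labels are invented.

```python
import numpy as np

_vocab: dict = {}

def embed(text, dim=128):
    # toy stand-in: unit-normalized bag-of-words; a real system would use
    # an LLM text-embedding model here instead
    v = np.zeros(dim)
    for w in text.lower().split():
        i = _vocab.setdefault(w, len(_vocab))
        v[i] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

def train(examples):
    # examples: list of (text, label); centroid = mean embedding per label
    buckets = {}
    for text, label in examples:
        buckets.setdefault(label, []).append(embed(text))
    return {lbl: np.mean(vs, axis=0) for lbl, vs in buckets.items()}

def classify(text, centroids):
    # nearest centroid by inner product (vectors are roughly unit-norm)
    v = embed(text)
    return max(centroids, key=lambda lbl: float(v @ centroids[lbl]))

cents = train([
    ("refund my order please", "billing"),
    ("charge on my card is wrong", "billing"),
    ("app crashes on launch", "bug"),
    ("screen goes blank when i click", "bug"),
])
```

with a handful of labeled examples and a decent embedding model, this kind of classifier is often good enough to replace a brittle hand-built NLP pipeline.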