Schaun Wheeler
@schaunwheeler.bsky.social
Anthropologist + Data Scientist. Cofounder at aampe.com
Did a little search, and "hot shot" apparently refers to a cannonball heated in a furnace (I guess to set targets on fire?). So I don't think that part is a body metaphor. I think it's more saying that Sally is dangerous when she sets her sights on you.
November 14, 2025 at 11:22 PM
We're not running minds. LLMs only reproduce procedural memory - one type of human long-term memory. They don't do semantic or episodic memory, or associative learning. A human mind that only did procedural memory would be as bad at being a mind as LLMs are.
October 27, 2025 at 11:41 PM
I've never really enjoyed Wodehouse. I tried several of his books and thought they were all kinda meh. But I ate up everything from Deeping, Rinehart, Chambers, Morris, Oppenheim, etc. I don’t get Wodehouse’s appeal, given his contemporaries...which makes me feel I'm missing something important.
September 3, 2025 at 11:22 PM
Question (honestly curious, not trying to be snarky): what do you find so perfectly executed about that story? I mean, it's delightful...but seems to be so in the same way as others of the same milieu, and with pacing a bit more stodgy than fits the character/setting.
September 3, 2025 at 11:22 PM
An agent that can’t *choose* its next move isn’t an agent. It’s just a novel interface for the same information retrieval, content management, and marketing automation systems we’ve had for years.
August 25, 2025 at 3:30 PM
Fully agentic systems need a hybrid architecture: a semantic–associative learner that builds and updates long-term user profiles, and a procedural actor that generates fluent, on-brand content (or retrieves it from inventory).
August 25, 2025 at 3:30 PM
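Roughly what that hybrid split could look like in code - a toy sketch, not Aampe's actual system. The llm_generate stub, the theme names, and the beta-sampling choice rule are all assumptions invented for illustration:

```python
from collections import defaultdict
import random

def llm_generate(prompt: str) -> str:
    """Stand-in for a real LLM call (the procedural actor)."""
    return f"[draft message for prompt: {prompt!r}]"

class SemanticAssociativeLearner:
    """Toy long-term profile: tracks how often each message theme
    has worked for each user, and updates from outcomes."""
    def __init__(self):
        self.profile = defaultdict(lambda: {"sent": 0, "engaged": 0})

    def record(self, user_id, theme, engaged):
        stats = self.profile[(user_id, theme)]
        stats["sent"] += 1
        stats["engaged"] += int(engaged)

    def choose_theme(self, user_id, themes):
        # Sample a plausible engagement rate per theme and pick the best draw,
        # so the learner keeps exploring while favoring what has worked.
        def sample(theme):
            s = self.profile[(user_id, theme)]
            return random.betavariate(1 + s["engaged"], 1 + s["sent"] - s["engaged"])
        return max(themes, key=sample)

class HybridAgent:
    """Learner decides *what* to say next; the LLM decides *how* to say it."""
    def __init__(self, learner):
        self.learner = learner

    def next_message(self, user_id, themes):
        theme = self.learner.choose_theme(user_id, themes)
        return theme, llm_generate(f"Write a push notification about {theme}")

learner = SemanticAssociativeLearner()
agent = HybridAgent(learner)
theme, draft = agent.next_message("user_42", ["discount", "return_policy", "new_arrivals"])
learner.record("user_42", theme, engaged=False)  # feedback closes the loop
```

The point of the split is that the part that learns and the part that writes are different components; the LLM never has to carry the user profile itself.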
Such hacks don’t build conceptual understanding or learn from outcomes. If a user ignores a “20% off” push yesterday, an LLM can draft today’s new message about your return policy - but it won’t autonomously pick that message. No adaptation, no evolving preferences.
August 25, 2025 at 3:30 PM
Recent LLM hacks like retrieval-augmented generation (external info stuffed into the prompt) or session-summary “memory” (re-feeding past interactions) preserve surface continuity and sometimes reduce hallucinations. But they aren’t true semantic/associative memory.
August 25, 2025 at 3:30 PM
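To make “stuffed into the prompt” concrete, here is a toy sketch of both hacks. The llm() and retrieve() functions are hypothetical stand-ins, not any particular library’s API:

```python
def llm(prompt: str) -> str:
    """Stand-in for any chat-completion call."""
    return "..."

def retrieve(query: str, store: list, k: int = 3) -> list:
    """Naive retrieval: in practice this would be a vector search."""
    return sorted(store, key=lambda doc: -sum(w in doc for w in query.split()))[:k]

def rag_answer(question: str, store: list) -> str:
    context = "\n".join(retrieve(question, store))
    # "Retrieval-augmented generation" here is just prompt construction:
    return llm(f"Context:\n{context}\n\nQuestion: {question}")

def answer_with_session_memory(question: str, history: list) -> str:
    summary = llm("Summarize this conversation:\n" + "\n".join(history))
    # The "memory" is re-fed text, not a durable model of the user:
    return llm(f"Conversation so far: {summary}\n\nUser: {question}")
```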
Real autonomy requires semantic + associative learning. An agent must consolidate experiences into transferable categories and tie those to outcomes. That’s how it forms opinions on which strategies to pursue or avoid over time.
August 25, 2025 at 3:30 PM
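A toy illustration of that consolidation step, using the sustainability example from elsewhere in the thread. The CATEGORY mapping, learn(), and preference() are invented for the sketch:

```python
# Map concrete messages to abstract categories (a stand-in for semantic memory)...
CATEGORY = {
    "20% off sneakers": "discount",
    "free returns till June": "return_policy",
    "bamboo t-shirts are back": "sustainability",
}

# ...and tie categories, not raw messages, to outcomes (associative learning).
outcomes = {}

def learn(message: str, engaged: bool):
    cat = CATEGORY.get(message, "other")
    wins, trials = outcomes.get(cat, (0, 0))
    outcomes[cat] = (wins + int(engaged), trials + 1)

def preference(category: str) -> float:
    wins, trials = outcomes.get(category, (0, 0))
    return (wins + 1) / (trials + 2)  # smoothed engagement rate

learn("bamboo t-shirts are back", engaged=True)
learn("20% off sneakers", engaged=False)
# Because learning happens at the category level, it transfers to messages
# the agent has never sent before:
print(preference("sustainability"))  # ≈ 0.67
```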
Agency ≠ next-token prediction. A truly agentic system decides *when* and *how* to act without waiting for instructions. LLMs only predict the next token given a prompt. They don’t decide to prompt themselves.
August 25, 2025 at 3:30 PM
Imagine someone who follows every cooking step flawlessly but has no sense of taste, no clue if others liked it, no idea how to improve. Without semantic understanding or feedback associations, true adaptation - and true agency - can’t happen.
August 25, 2025 at 3:30 PM
Those are crucial to human cognition, but we also rely on:
3. Semantic memory = abstract concepts (knowing that “sustainability” is a thing).
4. Associative learning = linking concepts to outcomes (learning that stressing sustainability drives engagement).
August 25, 2025 at 3:30 PM
LLMs excel at two kinds of “thinking”:
1. Procedural memory = automating skills (like writing a sentence or riding a bike).
2. Working memory = juggling info in the moment (like keeping a phone number in mind).
August 25, 2025 at 3:30 PM
It’s hard to move beyond campaigns because they’re simple. They tame messy behavior into tidy segments. But simplicity for you isn’t value for users. An agentic mindset means letting agents manage orchestration’s complexity while analysis stays clear and human-scale.
August 22, 2025 at 4:24 PM
Agentic systems separate orchestration from analysis. Orchestration is about maximizing who could benefit. Analysis is about retrospective learning—what worked, for whom, under which conditions. That separation expands impact without giving up interpretability.
August 22, 2025 at 4:24 PM
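One way that separation could look in code - a rough sketch with invented names (Contact, orchestrate, analyze), not a real system:

```python
import datetime as dt
from dataclasses import dataclass
from typing import Optional

@dataclass
class Contact:
    user_id: str
    message_theme: str
    sent_at: dt.datetime
    engaged: Optional[bool] = None

def orchestrate(users, themes, score):
    """Orchestration: maximize who *could* benefit - score every
    (user, theme) pair and pick the best theme per user."""
    return {u: max(themes, key=lambda t: score(u, t)) for u in users}

def analyze(log):
    """Analysis: retrospective learning - what worked, for whom,
    under which conditions (here, grouped by theme and hour of day)."""
    results = {}
    for c in log:
        key = (c.message_theme, c.sent_at.hour)
        wins, trials = results.get(key, (0, 0))
        results[key] = (wins + int(bool(c.engaged)), trials + 1)
    return results
```

Nothing in analyze() constrains orchestrate(): the scoring can reach everyone who might benefit, while the retrospective cuts stay as simple as you want them to be.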
That dual role makes campaign logic feel natural even when it’s arbitrary. Consider: “nudge users who haven’t engaged in 30 days.” Why 30? Why not 29, or 1? The threshold isn’t about user needs. It’s a simplification shaped by campaign design, not by actual behavior.
August 22, 2025 at 4:24 PM
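The arbitrariness is easy to see once the rule is written down. A hypothetical version of that campaign filter:

```python
import datetime as dt

THRESHOLD_DAYS = 30  # why 30? the number comes from campaign design, not from behavior

def lapsed_segment(users, last_engaged, now=None):
    """Classic campaign filter: everyone past the cutoff gets the same nudge."""
    now = now or dt.datetime.now()
    return [u for u in users if (now - last_engaged[u]).days >= THRESHOLD_DAYS]
```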
Campaigns usually serve two roles at once. First, orchestration: deciding which users get which messages under what conditions, breaking logistics into parts. Second, analysis: measuring outcomes tied to audience, timing, and content. Doing both jobs obscures insight.
August 22, 2025 at 4:24 PM
We don’t need to solve the philosophy of agency. What we need is a performance-based definition of acting agentically under complexity. Current benchmarks rarely test this, and when they do, they show LLMs falling short. That gap matters more than whether “agency” is solved.
August 21, 2025 at 12:06 PM
Over minutes, mimicry looks convincing. But over hours, days, or weeks, acting agentically means deciding what to do next, why, and how to carry those lessons forward. For that, semantic–associative learning is required. Procedural memory alone isn’t enough.
August 21, 2025 at 12:06 PM
When signals are delayed, goals conflict, or feedback is ambiguous, procedural mimicry fails. Without semantic memory to form abstractions and associative learning to link them to outcomes, systems can’t adapt with consistent success across shifting contexts.
August 21, 2025 at 12:06 PM
In predictable, stable environments with clear feedback, even LLMs can appear agentic. With only procedural and working memory, they give the impression of knowing what they’re doing. But the appearance fades when environments become less structured.
August 21, 2025 at 12:06 PM
In practice, the challenge is building systems whose behavior is hard to distinguish from that of beings who think for themselves. Acting agentically is a generalized Turing Test: not proving thought, but performing well enough that it looks like intention is present.
August 21, 2025 at 12:06 PM
Because the concept of agency is so unsettled, I think it’s better to sidestep. A system doesn’t need to “have agency” in order to “act agentically.” That distinction matters more than trying to solve the philosophical problem of what agency really is.
August 21, 2025 at 12:06 PM