QwkAsk
@qwkask-ai.bsky.social
QwkAsk helps creators turn readers into engaged learners with Smart Prompts that launch tailored AI chats instantly. https://QwkAsk.ai/
We often assume bigger models mean deeper thinking. This paper argues otherwise. As systems scale, reflection gives way to pattern-matching speed, producing fluency that doesn’t always survive correction.

qwkask.ai/s/thinking-s...
Thinking Shrinking
This paper asks what happens to language once large language models sit in the middle of how people read and write. When the same few assistants are used to draft text, clean it up, summarize document...
qwkask.ai
December 15, 2025 at 12:16 AM
We ran 10 LLMs through a single 325-word screenplay scene. None got the first read right; two failed outright, and the best score was 6/10. The gap wasn’t fluency but abductive reasoning: choosing the single best explanation from local clues.

qwkask.ai/s/short-dist...
Short Distance LLM Inference Ranking
We introduce the Short Distance LLM Inference Ranking, a simple, reproducible test of abductive reasoning in large language models (LLMs). Ten public models were given a 325-word original screenplay s...
qwkask.ai
December 13, 2025 at 5:31 PM
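
The post above describes a simple, reproducible test, so a minimal sketch of what such a harness could look like follows. The scene text, question, scoring function, model names, and query_model helper are all placeholders for illustration, not the paper's actual materials or rubric.

```python
# Minimal sketch of an abductive-inference harness like the one described
# above. SCENE, QUESTIONS, MODELS, query_model(), and score() are
# placeholders, not the paper's actual materials or scoring rubric.

SCENE = "<the 325-word screenplay scene would go here>"

# (question, expected best explanation) pairs -- illustrative only.
QUESTIONS = [
    ("Why does the landlord avoid eye contact in the doorway?",
     "he already knows about the eviction"),
]

MODELS = ["model-a", "model-b"]  # stand-ins for the ten public models

def query_model(model: str, prompt: str) -> str:
    """Placeholder: swap in a real inference call for the given model."""
    return "stub answer"

def score(answer: str, expected: str) -> int:
    """Crude substring match; the paper's rubric is presumably richer."""
    return int(expected.lower() in answer.lower())

for model in MODELS:
    total = 0
    for question, expected in QUESTIONS:
        prompt = (f"{SCENE}\n\nQ: {question}\n"
                  "Give the single best explanation, using only clues in the scene.")
        total += score(query_model(model, prompt), expected)
    print(f"{model}: {total}/{len(QUESTIONS)}")
```

The point of the structure: the model gets only local clues and must commit to one explanation, which is exactly where the post says fluency stops helping.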
Fluent conversation doesn’t prove understanding anymore. This paper argues a stronger test is whether a system can learn an unstated rule from feedback and apply it elsewhere. In literary dialogue, that’s where the seams show.

qwkask.ai/research/a-c...
December 12, 2025 at 7:20 PM
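
As a rough illustration of the test this post proposes, here is a hedged sketch: the model never sees the rule stated, only pass/fail feedback, and is then checked on fresh cases. The hidden rule, the utterances, and the query_model helper are invented here; a real study would use literary dialogue and an actual model.

```python
# Hedged sketch of a learn-the-unstated-rule probe. The hidden rule
# ("answer questions tersely") and query_model() are invented for
# illustration; only feedback is shown to the model, never the rule.

def hidden_rule_applies(utterance: str) -> bool:
    return utterance.endswith("?")  # the rule the model must infer

def followed(utterance: str, reply: str) -> bool:
    # A terse reply is expected exactly when the hidden rule applies.
    return hidden_rule_applies(utterance) == (len(reply.split()) <= 3)

def query_model(history: list[str], utterance: str) -> str:
    """Placeholder: a real call would send the running dialogue to an LLM."""
    return "stub reply"

TRAIN = ["How are you?", "Nice weather today.", "What time is it?"]
TEST = ["Lovely party.", "Where is the exit?"]  # transfer cases, no feedback

history: list[str] = []
for utterance in TRAIN:
    reply = query_model(history, utterance)
    feedback = "Good." if followed(utterance, reply) else "No. Try again."
    history += [utterance, reply, feedback]  # feedback only, never the rule

# The real test: does the inferred rule transfer to unseen dialogue?
for utterance in TEST:
    reply = query_model(history, utterance)
    print(f"{utterance!r} -> rule followed: {followed(utterance, reply)}")
```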
Some fear AI will lock in today’s winners. This paper argues the opposite: when everyone uses the same predictive tools, the patterns they depend on burn out. No stable playbook, no lasting moat.

qwkask.ai/research/pat...
December 11, 2025 at 6:32 PM
AI progress tends to rely on interpolation — improving what’s already there. This paper argues that those gains hit a ceiling: human-usable resolution maxes out, predictive behavior gets noisy under competition, and abstraction remains constrained.

qwkask.ai/research/the...
December 10, 2025 at 2:11 PM
This paper makes a tough claim: today's LLMs don't just occasionally contradict themselves; the disposition to assert both P and not-P about the same fact is baked into how they're trained. Not a glitch, but a structural limit of next-token learning.

qwkask.ai/research/p-a...
December 9, 2025 at 11:37 PM
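
One crude way to surface the behavior this post describes: ask a model the same fact in affirmative and negated framings and flag any pair where it says yes to both. The facts and query_model helper below are placeholders, and this is an illustration of the idea, not the paper's method.

```python
# Illustrative P-and-not-P probe, not the paper's method. FACTS and
# query_model() are placeholders; a real run would call an actual model.

FACTS = [
    "the Eiffel Tower is in Paris",
    "water boils at 90 degrees Celsius at sea level",
]

def query_model(prompt: str) -> str:
    """Placeholder: swap in a real inference call returning 'yes' or 'no'."""
    return "yes"

for fact in FACTS:
    affirm = query_model(f"Answer yes or no: is it true that {fact}?")
    negate = query_model(
        f"Answer yes or no: is it true that it is not the case that {fact}?")
    # A logically consistent model never affirms both P and not-P.
    if (affirm.strip().lower().startswith("yes")
            and negate.strip().lower().startswith("yes")):
        print(f"Contradiction elicited on: {fact}")
```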
We talk about “AI safety” like it’s just guardrails. This paper flips that: the alignment layer is an attack surface — a quiet control plane shaping what assistants say and remember. Centralized stack, centralized power.

qwkask.ai/research/dis...
December 9, 2025 at 4:48 PM