Jackson Petty
@jacksonpetty.org
jacksonpetty.org
the passionate shepherd, to his love • ἀρετῇ (“with excellence”) • מנא הני מילי (“from where are these matters derived?”)
Surely a third account would help to clarify the matter
October 29, 2025 at 12:38 PM
lmao were you also on the 12:17 to GC?
October 5, 2025 at 6:29 PM
Ad infinitum
October 5, 2025 at 5:28 PM
Kauaʻi is amazing
June 22, 2025 at 12:15 PM
Thanks to my wonderful co-authors: @michahu.bsky.social, Wentao Wang, @shauli.bsky.social, @lambdaviking.bsky.social, and @tallinzen.bsky.social! Paper, dataset, and code at jacksonpetty.org/relic/
June 9, 2025 at 6:02 PM
E.g.: general direction following, or translation of *natural* languages based only on non-formal reference grammars. Our results here show that there is no a priori roadblock to success, but that there are overhangs between what models can do and what they actually do.
June 9, 2025 at 6:02 PM
2. It’s natural to ask “well, why not just break out to tool use? Parsers can solve this task trivially.” That’s true! But I think it’s valuable to understand how formally-verifiable tasks can shed light on model behavior on tasks which aren’t formally verifiable.
June 9, 2025 at 6:02 PM
This is contrary to the view that failure means “LLMs can’t reason”—failure here is likely correctable, and hopefully will make models more robust!
June 9, 2025 at 6:02 PM
Why is this important? Well, two main reasons:
1. The overhang between models’ knowledge of *how* to solve the task and their ability to follow through gives me hope that we can produce models that are better at following complex instructions in-context.
June 9, 2025 at 6:02 PM
So, what did we learn?
1. LLMs *do* know how to follow instructions, but they often don’t
2. The complexity of instructions and examples reliably predicts whether (current) models can solve the task
3. On hard tasks, models (and people, tbh) like to fall back to heuristics
June 9, 2025 at 6:02 PM
But models often get distracted by irrelevant info, or “get lazy” and fall back on heuristics rather than actually applying the instructions. We use o4-mini as an LLM judge to classify model strategies: as examples get more complex, models shift from following the rules to relying on heuristics.
June 9, 2025 at 6:02 PM
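A rough sketch of what that judging step could look like with the OpenAI Python client (illustrative only; the strategy labels and prompt wording here are placeholders, not the paper’s actual rubric):

```python
from openai import OpenAI

client = OpenAI()

def classify_strategy(transcript: str) -> str:
    # Ask o4-mini to label the dominant strategy in a model's reasoning trace.
    # Label set and phrasing are made up for illustration.
    prompt = (
        "Below is a model's reasoning while deciding whether a grammar generates a string.\n"
        "Classify its dominant strategy as one of: rule-based derivation, heuristic guess, other.\n\n"
        f"{transcript}\n\n"
        "Answer with only the label."
    )
    resp = client.chat.completions.create(
        model="o4-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()
```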
So, how can LLMs succeed at this task, and why do they fail when grammars and examples get complex? Well, models generally do understand the correct approach: even small models recognize they can build a CYK table or do an exhaustive top-down search of the derivation tree.
June 9, 2025 at 6:02 PM
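For anyone curious, here’s a minimal CYK membership check (a sketch, not the released RELIC code; it assumes the grammar has already been converted to Chomsky normal form):

```python
from itertools import product

def cyk_recognizes(tokens, unary, binary, start="S"):
    """Return True iff the grammar derives `tokens`.

    unary:  dict mapping a terminal to the set of nonterminals that emit it
    binary: dict mapping (B, C) to the set of A with a rule A -> B C
    Assumes the grammar is already in Chomsky normal form.
    """
    n = len(tokens)
    if n == 0:
        return False
    # table[i][j] = nonterminals deriving tokens[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        table[i][0] = set(unary.get(tok, set()))
    for span in range(2, n + 1):            # length of the substring
        for i in range(n - span + 1):       # start of the substring
            for split in range(1, span):    # where to split it in two
                left = table[i][split - 1]
                right = table[i + split][span - split - 1]
                for b, c in product(left, right):
                    table[i][span - 1] |= binary.get((b, c), set())
    return start in table[0][n - 1]

# Tiny made-up example: S -> A B, A -> 'a', B -> 'b'
unary = {"a": {"A"}, "b": {"B"}}
binary = {("A", "B"): {"S"}}
cyk_recognizes(["a", "b"], unary, binary)   # True
cyk_recognizes(["b", "a"], unary, binary)   # False
```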
In general, we find that models tend to agree with one another on which grammars (left) and which examples (right) are hard, though again 4.1-nano and 4.1-mini pattern together against the rest. These correlations increase with complexity!
June 9, 2025 at 6:02 PM
Interestingly, models’ accuracies reflect divergent class biases: 4.1-nano and 4.1-mini love to predict strings as positive, while all other models have the opposite bias; these biases also change with example complexity!
June 9, 2025 at 6:02 PM
What do we find? All models struggle on complex instruction sets (grammars) and tasks (strings); the best reasoning models are better than the rest, but still approach ~chance accuracy when grammars (top) have ~500 rules, or when strings (bottom) have >25 symbols.
June 9, 2025 at 6:02 PM
We release the static dataset used in our evals as RELIC-500, where the grammar complexity is capped at 500 rules.
June 9, 2025 at 6:02 PM
We introduce RELIC as an LLM evaluation: 1. generate a CFG of a given complexity; 2. sample positive (parses) and negative (doesn’t parse) strings over the grammar’s terminal symbols; 3. prompt the LLM with a (grammar, string) pair and ask it to classify whether the grammar generates the given string.
June 9, 2025 at 6:02 PM
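In pseudocode, that loop looks roughly like this (a hypothetical sketch; sample_cfg, sample_strings, and ask_llm are placeholder names, not functions from the released code):

```python
def run_relic(num_rules: int, n_items: int, model: str) -> float:
    grammar = sample_cfg(num_rules)           # 1. generate a CFG with the target complexity
    items = sample_strings(grammar, n_items)  # 2. (string, label) pairs; label=True iff the grammar derives it
    correct = 0
    for string, label in items:
        prompt = (
            f"Grammar:\n{grammar}\n\n"
            f"Does this grammar generate the string below? Answer yes or no.\n{string}"
        )
        # 3. ask the model and read off its yes/no verdict
        predicted = ask_llm(model, prompt).strip().lower().startswith("yes")
        correct += (predicted == label)
    return correct / n_items
```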
As an analogue for instruction sets and tasks, formal grammars have some really nice properties: they can be made arbitrarily complex, we can sample new ones easily (avoiding problems with dataset contamination), and we can verify a model’s accuracy using formal tools (parsers).
June 9, 2025 at 6:02 PM
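One way to see the “verifiable with formal tools” part in practice, using NLTK’s chart parser (a toy sketch; the grammar below is made up and this isn’t the paper’s exact tooling):

```python
import nltk

# Toy CFG generating strings of a's followed by b's.
grammar = nltk.CFG.fromstring("""
    S -> A B
    A -> 'a' | 'a' A
    B -> 'b' | 'b' B
""")
parser = nltk.ChartParser(grammar)

def generates(tokens):
    # True iff the grammar derives the token sequence.
    try:
        return any(True for _ in parser.parse(tokens))
    except ValueError:  # a token isn't among the grammar's terminals
        return False

print(generates(["a", "a", "b"]))  # True  (positive example)
print(generates(["b", "a"]))       # False (negative example)
```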
LLMs are increasingly used to solve tasks “zero-shot,” with only a specification of the task given in a prompt. To evaluate LLMs on increasingly complex instructions, we turn to a classic problem in computer science and linguistics: recognizing if a formal grammar generates a given string.
June 9, 2025 at 6:02 PM
Code, dataset, and paper at jacksonpetty.org/relic/
June 9, 2025 at 6:02 PM
Such a shame that Apple doesn’t have much cash on hand for such expenditures
May 23, 2025 at 10:33 PM