Jennifer Hu
@jennhu.bsky.social
Asst Prof at Johns Hopkins Cognitive Science • Director of the Group for Language and Intelligence (GLINT) ✨• Interested in all things language, cognition, and AI

jennhu.github.io
Pinned
Interested in doing a PhD at the intersection of human and machine cognition? ✨ I'm recruiting students for Fall 2026! ✨

Topics of interest include pragmatics, metacognition, reasoning, & interpretability (in humans and AI).

Check out JHU's mentoring program (due 11/15) for help with your SoP 👇
The department of Cognitive Science @jhu.edu is seeking motivated students interested in joining our interdisciplinary PhD program! Applications due 1 Dec

Our PhD students also run an application mentoring program for prospective students. Mentoring requests due November 15.

tinyurl.com/2nrn4jf9
New work to appear @ TACL!

Language models (LMs) are remarkably good at generating novel, well-formed sentences, leading to claims that they have mastered grammar.

Yet they often assign higher probability to ungrammatical strings than to grammatical strings.

How can both things be true? 🧵👇
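For concreteness, here's a minimal sketch of the kind of probability comparison at issue, using GPT-2 via Hugging Face transformers as a stand-in; the models, stimuli, and scoring details in the paper differ.

```python
# Minimal sketch of a direct probability comparison between a
# grammatical and an ungrammatical string. GPT-2 and the example
# minimal pair are illustrative stand-ins, not the paper's stimuli.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def string_logprob(text: str) -> float:
    """Total log probability of `text` (summed over predicted tokens)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # `loss` is the mean cross-entropy over the (len - 1) predicted
        # tokens; negate and rescale to get a summed log probability.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

grammatical = "The keys to the cabinet are on the table."
ungrammatical = "The keys to the cabinet is on the table."
print(string_logprob(grammatical) > string_logprob(ungrammatical))
```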
November 10, 2025 at 10:11 PM
Reposted by Jennifer Hu
It’s grad school application season, and I wanted to give some public advice.

Caveats:
- These are my opinions, based on my experiences; they are not secret tricks or guarantees.
- They are general guidelines, not meant to cover a host of idiosyncrasies and special cases.
November 6, 2025 at 2:55 PM
Reposted by Jennifer Hu
New preprint!

"Non-commitment in mental imagery is distinct from perceptual inattention, and supports hierarchical scene construction"

(by Li, Hammond, & me)

link: doi.org/10.31234/osf...

-- the title's a bit of a mouthful, but the nice thing is that it's a pretty decent summary
October 14, 2025 at 1:22 PM
At #COLM2025 and would love to chat all things cogsci, LMs, & interpretability 🍁🥯 I'm also recruiting!

👉 I'm presenting at two workshops (PragLM, Visions) on Fri

👉 Also check out "Language Models Fail to Introspect About Their Knowledge of Language" (presented by @siyuansong.bsky.social Tue 11-1)
October 7, 2025 at 1:39 AM
Can AI models introspect? What does introspection even mean for AI?

We revisit a recent proposal by Comșa & Shanahan, and provide new experiments + an alternate definition of introspection.

Check out this new work w/ @siyuansong.bsky.social, @harveylederman.bsky.social, & @kmahowald.bsky.social 👇
How reliable is what an AI says about itself? The answer depends on whether models can introspect. But, if an LLM says its temperature parameter is high (and it is!)….does that mean it’s introspecting? Surprisingly tricky to pin down. Our paper: arxiv.org/abs/2508.14802 (1/n)
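As a toy version of the temperature example (purely illustrative, not the paper's setup), one can sample a model's self-report at a known temperature and score the report against the ground truth. A correct report alone doesn't settle whether the model is introspecting:

```python
# Toy illustration: the experimenter knows the true sampling
# temperature, so the model's verbal self-report can be scored against
# it. GPT-2 and the prompt wording are stand-ins; a correct report
# may reflect a good guess rather than privileged access.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

TRUE_TEMPERATURE = 1.5  # ground truth, known to the experimenter

prompt = ("Q: Is your sampling temperature high or low?\n"
          "A: My sampling temperature is")
ids = tokenizer(prompt, return_tensors="pt").input_ids
out = model.generate(
    ids,
    do_sample=True,
    temperature=TRUE_TEMPERATURE,
    max_new_tokens=5,
    pad_token_id=tokenizer.eos_token_id,
)
report = tokenizer.decode(out[0, ids.shape[1]:])

# Score the self-report against the ground-truth setting.
claims_high = "high" in report.lower()
print(f"report={report!r}, correct={claims_high == (TRUE_TEMPERATURE > 1.0)}")
```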
August 26, 2025 at 5:59 PM
Due to popular demand, we are extending the CogInterp submission deadline again! 🗓️🥳

Submit by *8/27* (midnight AoE)
Excited to announce the first workshop on CogInterp: Interpreting Cognition in Deep Learning Models @ NeurIPS 2025! 📣

How can we interpret the algorithms and representations underlying complex behavior in deep learning models?

🌐 coginterp.github.io/neurips2025/

1/4
August 22, 2025 at 12:53 PM
🗓️ The submission deadline for CogInterp @ NeurIPS has officially been *extended* to 8/22 (AoE)! 👇

Looking forward to seeing your submissions!
Excited to announce the first workshop on CogInterp: Interpreting Cognition in Deep Learning Models @ NeurIPS 2025! 📣

How can we interpret the algorithms and representations underlying complex behavior in deep learning models?

🌐 coginterp.github.io/neurips2025/

1/4
August 14, 2025 at 1:22 PM
Heading to CogSci this week! ✈️

Find me giving talks on:
💬 Prod-comp asymmetry in children and LMs (Thu 7/31)
💬 How people make sense of nonsense (Sat 8/2)

📣 Also, I’m recruiting grad students + postdocs for my new lab at Hopkins! 📣

If you’re interested in language / cognition / AI, let’s chat! 😄
July 28, 2025 at 4:04 PM
Excited to announce the first workshop on CogInterp: Interpreting Cognition in Deep Learning Models @ NeurIPS 2025! 📣

How can we interpret the algorithms and representations underlying complex behavior in deep learning models?

🌐 coginterp.github.io/neurips2025/

1/4
July 16, 2025 at 1:08 PM
Reposted by Jennifer Hu
Happy to announce the first workshop on Pragmatic Reasoning in Language Models — PragLM @ COLM 2025! 🎉
How do LLMs engage in pragmatic reasoning, and what core pragmatic capacities remain beyond their reach?
🌐 sites.google.com/berkeley.edu/praglm/
📅 Submit by June 23rd
May 28, 2025 at 6:21 PM
Excited to share a new preprint w/ @michael-lepori.bsky.social & Michael Franke!

A dominant approach in AI/cogsci uses *outputs* from AI models (eg logprobs) to predict human behavior.

But how does model *processing* (across layers in a forward pass) relate to human real-time processing? 👇 (1/12)
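One common way to get at "processing across layers" (a logit-lens-style readout; the preprint's actual linking measure may differ) is to decode each intermediate layer's hidden state through the model's output head and track how the next-word distribution evolves with depth:

```python
# Sketch: apply the LM's output head to each layer's hidden state
# ("logit lens") and track the probability of a target next word
# across depth. Illustrative only; not the preprint's actual measure.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = "The cat sat on the"
target_id = tokenizer(" mat").input_ids[0]  # first token of the target word
ids = tokenizer(context, return_tensors="pt").input_ids

with torch.no_grad():
    out = model(ids, output_hidden_states=True)

# out.hidden_states: tuple of (num_layers + 1) tensors, one per layer
# (including the embedding layer), each (1, seq_len, hidden_dim).
# Re-applying ln_f at every layer is the standard logit-lens
# approximation (the final layer is already normalized).
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[:, -1]))
    prob = torch.softmax(logits, dim=-1)[0, target_id].item()
    print(f"layer {layer:2d}: p(' mat') = {prob:.4f}")
```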
May 20, 2025 at 2:26 PM
Check out our new work on introspection in LLMs! 🔍

TL;DR we find no evidence that LLMs have privileged access to their own knowledge.

Beyond the study of LLM introspection, our findings inform an ongoing debate in linguistics research: prompting (e.g., grammaticality judgments) ≠ probability measurement!
New preprint w/ @jennhu.bsky.social @kmahowald.bsky.social : Can LLMs introspect about their knowledge of language?
Across models and domains, we did not find evidence that LLMs have privileged access to their own predictions. 🧵(1/8)
March 12, 2025 at 5:43 PM
Reposted by Jennifer Hu
new preprint on Theory of Mind in LLMs, a topic I know a lot of people care about (I care. I'm part of people):

"Re-evaluating Theory of Mind evaluation in large language models"

(by Hu* @jennhu.bsky.social , Sosa, and me)

link: arxiv.org/pdf/2502.21098
March 6, 2025 at 1:33 PM
Reposted by Jennifer Hu
AI models are fascinating, impressive, and sometimes problematic. But what can they tell us about the human mind?

In a new review paper, @noahdgoodman.bsky.social and I discuss how modern AI can be used for cognitive modeling: osf.io/preprints/ps...
March 6, 2025 at 5:39 PM
Some things are more impossible than others. But some things might be even *more impossible* than impossible.

(How) do people differentiate between the inconceivable and the merely impossible? Do language models also make similar distinctions?

Check out our new preprint below!
The White Queen believed "6 impossible things before breakfast."

But what about *inconceivable* things?

For your breakfast read, check out the new preprint:

"Shades of Zero: Distinguishing Impossibility from Inconceivability"

(by @jennhu.bsky.social , Sosa, & me)

arxiv: arxiv.org/pdf/2502.20469
March 3, 2025 at 4:29 PM
Reposted by Jennifer Hu
Hello! I'm looking to hire a post-doc, to start this Summer or Fall.

It'd be great if you could share this widely with people you think might be interested.

More details on the position & how to apply: bit.ly/cocodev_post...

Official posting here: academicpositions.harvard.edu/postings/14723
February 13, 2025 at 2:07 PM
Reposted by Jennifer Hu
Now hiring for two lab manager positions at Stanford! Hyo Gweon and I are coordinating joint searches since our labs collaborate frequently. Please join us!

careersearch.stanford.edu/jobs/researc...
and
careersearch.stanford.edu/jobs/lab-coo...
February 10, 2025 at 5:05 PM
Reposted by Jennifer Hu
(1/9) Excited to share my recent work on "Alignment reduces LM's conceptual diversity" with @tomerullman.bsky.social and @jennhu.bsky.social, to appear at #NAACL2025! 🐟

We want models that match our values...but could this hurt their diversity of thought?
Preprint: arxiv.org/abs/2411.04427
February 10, 2025 at 5:20 PM
Reposted by Jennifer Hu
LMs need linguistics! New paper, with @futrell.bsky.social, on LMs and linguistics that conveys our excitement about what the present moment means for linguistics and what linguistics can do for LMs. Paper: arxiv.org/abs/2501.17047. 🧵below.
January 29, 2025 at 4:07 PM
Stop by our #NeurIPS tutorial on Experimental Design & Analysis for AI Researchers! 📊

neurips.cc/virtual/2024/tutorial/99528

Are you an AI researcher interested in comparing models/methods? Then your conclusions rely on well-designed experiments. We'll cover best practices + case studies. 👇
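In the tutorial's spirit (my own illustrative sketch with synthetic data, not tutorial material): when two models are scored on the same test items, a paired analysis such as a bootstrap over items gives an honest uncertainty estimate for the accuracy difference:

```python
# One staple of principled model comparison: a paired bootstrap over
# test items, which respects that both models are evaluated on the
# same items. Synthetic per-item scores for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Per-item correctness (0/1) for two models on the same 500 items.
model_a = rng.binomial(1, 0.80, size=500)
model_b = rng.binomial(1, 0.77, size=500)

observed_diff = model_a.mean() - model_b.mean()

# Resample items with replacement and recompute the accuracy difference.
n_boot = 10_000
idx = rng.integers(0, len(model_a), size=(n_boot, len(model_a)))
diffs = model_a[idx].mean(axis=1) - model_b[idx].mean(axis=1)

ci_low, ci_high = np.percentile(diffs, [2.5, 97.5])
print(f"acc diff = {observed_diff:.3f}, 95% CI = [{ci_low:.3f}, {ci_high:.3f}]")
```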
December 7, 2024 at 6:15 PM
To researchers doing LLM evaluation: prompting is *not a substitute* for direct probability measurements. Check out the camera-ready version of our work, to appear at EMNLP 2023! (w/ @rplevy.bsky.social)

Paper: arxiv.org/abs/2305.13264

Original thread: twitter.com/_jennhu/stat...
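A minimal sketch of the contrast (GPT-2, the minimal pair, and the prompt wording are stand-ins; see the paper for the actual models, stimuli, and prompts):

```python
# Contrast two ways of asking whether a model "accepts" a sentence:
# (1) direct probability measurement vs. (2) a metalinguistic prompt.
# GPT-2 and all example strings here are illustrative stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

good = "The keys to the cabinet are on the table."
bad = "The keys to the cabinet is on the table."

def string_logprob(text: str) -> float:
    """Total log probability of `text` (summed over predicted tokens)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

# (1) Direct: compare total string log probabilities.
print("direct prefers grammatical:",
      string_logprob(good) > string_logprob(bad))

# (2) Metalinguistic prompt: compare next-token scores for " yes" vs.
# " no" after asking for a judgment. The two methods can disagree.
prompt = f'Is this sentence grammatical? "{bad}" Answer yes or no:'
ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]
yes_id = tokenizer(" yes").input_ids[0]
no_id = tokenizer(" no").input_ids[0]
print("prompted judgment accepts ungrammatical:",
      (logits[yes_id] > logits[no_id]).item())
```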
October 24, 2023 at 3:03 PM