Shane Littrell, PhD
@metacognishane.bsky.social
https://shanelittrell.com/
Cognitive psychologist. Cornell University postdoc & researcher at Media Ecosystem Observatory. Previously UToronto (Munk School), UMiami, & UWaterloo | Bullshitology, conspiracy beliefs, political propaganda, metacognition 🧠☘️
Facebook out here bluntly reminding me what it feels like to get older:
November 6, 2025 at 9:17 PM
A little over a decade ago today, I was almost killed by a drunk driver on my way home from work. So, this is my annual reminder (whenever I can remember) to ask people to please not drink & drive. Smash chocolate milk or Sour Apple Ryse energy drinks instead.
October 8, 2025 at 1:38 AM
New post is up, discussing two recent examples of organizations trying to bullshit the public. Check it out when you get time!

bullshitology.substack.com/p/down-the-c...
August 27, 2025 at 4:31 PM
Apparently so. 🪦👨‍🔬📈

pubs.aeaweb.org/doi/pdfplus/...
August 4, 2025 at 7:53 PM
And if you gaze for long into an abyss, the abyss gazes also into you...
July 7, 2025 at 10:26 PM
(6/6) Finally, it took one last opportunity to crap all over the Q's that I usually include 😅 before offering a helpful closing summary. Haven't tried any of these suggestions yet but figured I'd post them here in the hopes others might also find them helpful. 🧠
July 3, 2025 at 12:01 AM
(5/6) It also recommended some helpful concrete Q's that I could include in future surveys:
July 3, 2025 at 12:01 AM
(4/6) It also compared closed-ended vs open-ended approaches:
July 3, 2025 at 12:01 AM
(3/6) Next, it generated a list of suggestions for approaches that might work better, with realistic examples and rationales behind each recommendation:
July 3, 2025 at 12:01 AM
(2/6) Here, it describes how my usual open-ended checks don't quite go far enough to help me identify LLM responses:
July 3, 2025 at 12:01 AM
(1/6) A small but annoying % of ppl use AI to generate fraudulent answers for open-ended survey Qs. So, in an effort to fight 🔥w/ 🔥, I asked GPT-o3 for some suggestions to help me identify LLM responses. Haven't tested them yet but some look promising. Here's a 🧵of what it came up with:
July 3, 2025 at 12:01 AM
I've been on a "Moreover" kick lately in my manuscripts, for some reason. Also, "Consequently" and "Crucially" (which I stole from Gord Pennycook). I probably need to break those writing habits soon before someone accuses me of being an LLM.
May 13, 2025 at 9:52 PM
I don't have much positive or encouraging to say about the state of the world right now (esp. since they just cut ~$1B from our school & I have no idea if my postdoc funding is at risk), so I'm gonna indulge in some homemade choc-chip cookie therapy. Here's my own recipe if you wanna do the same:
April 11, 2025 at 6:24 PM
Greatness takes time. 10 lbs of sweet onions...6 hours later:
April 2, 2025 at 2:06 AM
Also @maxprimbs.bsky.social, if you can use prescreens, I have the drag&drop set up in Qualtrics to kick out ppl who fail it. I also ask another Q that LLMs often fail ("How are you feeling today?" or "What color is your shirt?") that can be set up to kick ppl out too (see pics).
March 25, 2025 at 11:30 PM
Depends on what sample service I'm using. If they allow prescreens, I use a drag-&-drop to weed out ppl who use bots (bots & LLMs usually can't do those...for now). Then I'll stick an ideologically-charged open-ended Q later in the survey to catch anyone who slipped by the prescreen (pic 2).
March 25, 2025 at 11:26 PM
Also look out for ALL CAPS copy/pastes. Disappointed that out of N=250, 15 were potentially fraudulent, which is a higher % than I ever got using my botcha prescreens. Hopefully CloudResearch is working on quick solutions or revising their TOS to allow anti-fraud prescreen Qs again.
March 24, 2025 at 10:45 PM
Collected data today using CloudResearch Connect. Usually they're good, but I wasn't able to use my "botcha" pre-screens this time b/c apparently that's now a TOS violation. 🙄 So, I included an open-ended Q to identify responses from LLMs & search bots.

Look out for "Certainly!" It's an LLM red flag.
March 24, 2025 at 10:45 PM
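The two red flags from the posts above (canned "Certainly!"-style openers and ALL CAPS copy/pastes) can be sketched as a quick screening pass over open-ended responses. Note the phrase list and the caps threshold below are illustrative assumptions, not the author's actual checks:

```python
# Illustrative sketch: flag open-ended survey responses showing common
# bot/LLM "tells". Phrase list and caps threshold are assumptions.

LLM_OPENERS = ("certainly!", "as an ai", "i'm happy to help", "great question")

def flag_response(text: str, caps_ratio: float = 0.8) -> bool:
    """Return True if a response looks like a bot/LLM copy-paste."""
    stripped = text.strip()
    lowered = stripped.lower()
    # Red flag 1: canned LLM opener phrases anywhere in the response
    if any(phrase in lowered for phrase in LLM_OPENERS):
        return True
    # Red flag 2: response is mostly ALL CAPS (copy/paste artifact)
    letters = [c for c in stripped if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) >= caps_ratio:
        return True
    return False

responses = [
    "Certainly! Here are three reasons why...",
    "I just think prices are too high lately.",
    "THIS IS MY ANSWER PASTED FROM SOMEWHERE",
]
flags = [flag_response(r) for r in responses]  # [True, False, True]
```

Flagged responses would still need a manual look, since heuristics like these produce false positives (e.g., a genuinely enthusiastic human who types in caps).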
Still my favorite footnote (about a dumb, brain farty mistake on my part): doi.org/10.1002/acp....
March 19, 2025 at 5:07 AM
New pub out today w/ fantastic & mostly BSky-less colleagues at UMiami on predictors of belief in science-related conspiracy theories. This one's OPEN ACCESS and pretty straightforward, so I'll skip the long explanation thread 🙂: spssi.onlinelibrary.wiley.com/doi/10.1111/...
March 17, 2025 at 7:12 PM
"Moral bullshitting" seems like a spicier way of saying "moral hypocrisy." Are they different? Frankfurt DID say that BSing is meant to mislead (he speaks at length about BSing involving "deliberate misrepresent[ation]). The "manipulate opinions..." line comes from "On Truth," not "On Bullshit." :)
March 3, 2025 at 5:59 PM
Sigh...more dipshittery from FedGov (see quoted post). I usually ask the sex/gender question as a two-parter (see pics below). Will this still be acceptable? I'm guessing I'll probably at least have to cut the second part? 😒 If anyone knows or has advice on this, plz lemme know.
February 22, 2025 at 1:55 AM
Hmm...🤔...this feels oddly familiar...😉
February 20, 2025 at 7:34 PM
Every day, I think, "There's no way he can be this f-ing stupid," and every day, he posts proof that he's even more stupid than I thought. He doesn't seem to understand that international students & workers get SSNs (>1 million/yr), eventually move back home, & their SSNs stay active.
February 17, 2025 at 8:21 PM
Another reason I use CloudResearch Connect. One trick that used to be helpful (not sure now given widespread AI use) is including a Q that shows a pic of a zucchini, eggplant, or cantaloupe & asks ppl what it's called. Each has a different name outside the US, so replying w/ a non-Western name is a VPN🚩
February 5, 2025 at 9:45 PM