Joel Gladd
brehove.bsky.social
Department Chair of Integrated Studies; Writing and Rhetoric, American Lit; Higher-Ed Pedagogy; OER advocate
In the cycle of enchanted, disenchanted, and re-enchanted with the AI world, I’m briefly stuck in the disenchanted space. Mainly in applications to writing.

I’m relying on these tools for STEM-related tasks, analysis, etc., but I’m becoming very cynical about their use in (non-technical) writing.
March 8, 2025 at 8:44 PM
It's interesting that this has gone under the radar: OpenAI began adopting a constitutional approach to alignment in late 2024, updated last week. Their "deliberative alignment" specs tell the model to treat a list of rules deontologically and deliberate on how they apply to particular examples.
February 23, 2025 at 2:49 PM
My recent article explores how colleges can scale AI readiness in First-Year courses, drawing insights from our FYE program at CWI. It also includes links to the training we developed (CC BY). Some of these resources may work in First-Year Writing courses as well.

www.linkedin.com/pulse/how-co...
How Colleges Can Scale AI Readiness: Lessons from a First-Year Experience Program
I recently presented at the 44th Annual Conference on the First-Year Experience and wanted to share what my amazing team (Liza Long, Ed.D.
www.linkedin.com
February 20, 2025 at 2:23 PM
I used the DeepSeek R1 reasoning model to prepare for a new course proposal. These screenshots show the output with and without the "DeepThink" option turned on--strikingly different. R1 does a lot more synthesis and offers clearer suggestions. It also accepts PDF files; o1 doesn't. Crazy that this is open source.
January 22, 2025 at 4:40 PM
What worked so well in this Nigerian experiment with using AI to boost literacy is how carefully each step is overseen by actual teachers. Perhaps the "deskilling" we see in other studies (students losing skills because of too much assistance) reflects bad strategy. blogs.worldbank.org/en/education...
January 19, 2025 at 6:51 PM

Prompt engineering with o1:

Interesting to compare this o1 strategy with the CLEAR or RFTC framework (role, format, task, constraints)

I currently find myself relying less on “role” and more on “context dump”
January 13, 2025 at 12:47 AM
This is one of the most elegant definitions of LLMs I’ve seen.

(from this post explaining the new o3 model and the ARC benchmark: arcprize.org/blog/oai-o3-...)
December 20, 2024 at 10:49 PM
I'm shopping for broccoli sprout seeds on Amazon and the "most helpful review" is 100% AI-generated. It's hard for me to read because half the words are completely pointless--but apparently it's helpful to others! 🤷‍♂️
December 11, 2024 at 11:18 PM
My program is collecting data on this (through surveys) and it somewhat tracks. A small percentage of students are definitely “anti-AI”.

Most students, OTOH, say they’re uncomfortable with faculty using AI to evaluate their work, but they’re comfortable with AI in ed otherwise.
December 3, 2024 at 5:37 PM
I love seeing health gurus compete like this
December 3, 2024 at 4:36 PM
Reposted by Joel Gladd
The anecdote in the Hard Fork podcast may have been intended as—and should definitely be interpreted as—an apt and colorful analogy one researcher perceived between Arrival and the simultaneity of processing in Transformers.

It won't bear much weight as a literal claim about historical causality.
OK, so maybe what Kevin Roose bungled here was that one of the paper's authors (Polosukhin) had made a comparison to "Arrival" ("believed self-attention was a bit like ...") - but without claiming it had been the actual inspiration
www.ft.com/content/37bb...
December 1, 2024 at 6:57 PM
it's weird that the most effective prompt hack STILL is just "approach this in the style of [person]"

these models internalize not just the style but the whole vibe - the writer's voice, epistemic stance, worldview, etc. "write like x" tightens everything up so nicely
November 25, 2024 at 8:50 PM
I'm seeing more AI scanning being done by employers and institutions--checking for GenAI text in resumes, grant applications, etc.

It's odd that "preparing for the workplace" now means both NOT using GenAI for some things but being really savvy at other tasks.

Anyone publishing on this?
November 25, 2024 at 1:34 PM
Not sure why it took me 6 months to discover Max Read's famous article where he coined the term "Zynternet".

article here: maxread.substack.com/p/hawk-tuah-...

perplexity's summary here: www.perplexity.ai/search/zynte...
Hawk Tuah and the Zynternet
Plus, for some reason, some thoughts about the debate
maxread.substack.com
November 25, 2024 at 1:43 AM
I think a lot of debates over the ethical use of AI in the classroom would be more productive if all parties first agreed that many things are happening simultaneously. Context is that which is scarce.
AI can help learning... when it isn't a crutch.

There are now multiple controlled experiments showing that students who use AI to get answers to problems learn less (even though they think they are learning), while students who use well-prompted LLMs as tutors perform better on tests.
November 23, 2024 at 8:35 PM
Reposted by Joel Gladd
If you're depressed that non-expert readers prefer AI-written poetry to the classics, perhaps try Matt's quiz.

You may discover that the real finding here is the huge gulf between your own taste and that of non-expert readers ... a gulf that has likely existed at least since, oh, IA Richards?
I made a Google quiz using the poems listed on the paper’s OSF site if you want to try your luck at guessing which are human and which are LLM. forms.gle/nGCGawDb9c6f...
November 20, 2024 at 9:55 PM
I like this take from Ben on the future of AI in Hollywood because 1) he’s done his homework on transformer models lol and 2) he clearly understands the possibilities without 3) getting lost in hype.
November 18, 2024 at 6:10 PM
Wow so all of academic twitter is now here, interesting.

Nice to see you.
November 14, 2024 at 2:04 PM