Tyler Hampton, PhD
tylerbhampton.bsky.social
(He/Him) All things water: biogeochemistry, ecohydrology, wetlands, wildfires 💧🔥🌲 🐦Queer bird watcher 🌈
Reposted by Tyler Hampton, PhD
PLEASE this.

And to scientists writing press releases, too: calling something "AI" when it was actually your student spending 12 months fitting and validating a model is disingenuous
Journalist challenge: Use “Machine Learning” when you mean machine learning and “LLM” when you mean LLM. Ditch “AI” as a catch-all term: it’s not useful for readers, and it helps companies trying to confuse the public by obscuring the roles played by different technologies. 🧪
November 23, 2025 at 9:12 AM
Reposted by Tyler Hampton, PhD
It's the weekend, baby
November 21, 2025 at 11:10 PM
Reposted by Tyler Hampton, PhD
People learn through practice and challenge, not by receiving answers.

But a deeper problem is that an essential aspect of teaching is helping someone organize their thinking in new ways. LLMs — systems which cannot think or appreciate thinking — are incapable of doing this in a meaningful way.
Relying on ChatGPT to teach you about a topic leaves you with shallower knowledge than Googling and reading about it, according to new research that compared what more than 10,000 people knew after using one method or the other.

Shared by @gizmodo.com: buff.ly/yAAHtHq
November 21, 2025 at 1:29 PM
Reposted by Tyler Hampton, PhD
It’s widely known (and, I think, pretty uncontroversial) that learning requires effort — specifically, if you don’t have to work at getting the knowledge, it won’t stick.

Even if an LLM could be trusted to give you correct information 100% of the time, it would be an inferior method of learning it.
November 21, 2025 at 12:49 PM
Reposted by Tyler Hampton, PhD
can’t say what’s going through my head right now

But something about US history, Black women, Native women, and Nazis
November 16, 2025 at 8:16 PM
Reposted by Tyler Hampton, PhD
Key Findings: Exposure to a field-specific faculty sexual misconduct incident decreases degree completion in that field by 3.4 percent four years after the incident. This decline is driven by incidents occurring after 2015, among which we observe a 7 percent decline in in-field degree completion.
November 10, 2025 at 1:49 PM
Reposted by Tyler Hampton, PhD
New York Times killed Weinstein and Epstein stories but just asked Ms Rachel if she was funded by Hamas.
November 12, 2025 at 8:50 PM
Reposted by Tyler Hampton, PhD
No no no begs every archivist. You are never going to be able to find anything. Please don’t start using emojis in file names. Who asked for this? What fresh hell is next?
November 12, 2025 at 10:38 AM
Reposted by Tyler Hampton, PhD
"knowledge is humanity's spiritual birthright. Making it, playing with it, revising it, sifting through it, making sense of it -- it is baked into who we are as a species. [...] Every generation is responsible for protecting this legacy for the next generation."

Printing this to hand to my 13yo:
October 23, 2025 at 4:36 PM
Reposted by Tyler Hampton, PhD
This week's Citified features a guest post from Ray Angod, in which he shares concerns about losing natural spaces in the city, especially wetland areas in and around the Huron Natural Area.
Keeping the “Natural” in Huron Natural Area
Guest post by Ray Angod
buff.ly
October 20, 2025 at 11:45 AM
Reposted by Tyler Hampton, PhD
I don't know how many times this needs to be posted, but clearly it is MANY TIMES.

Sigh.
October 2, 2025 at 6:35 AM
Reposted by Tyler Hampton, PhD
We need professors to point out that dipshits like this are clueless about what goes on in a university: www.forbes.com/sites/shanno...
When Knowledge is Free, What are Professors For?
Higher Education Must Stop Competing with AI on Information and Start Teaching What Machines Can’t Do
www.forbes.com
October 16, 2025 at 9:53 PM
Reposted by Tyler Hampton, PhD
Neurologically speaking, there's no such thing as 'useless' subjects or 'rip-off' degrees

Insisting we should only teach useful subjects is like saying science should only do research that gives positive results

If you honestly believe that's how anything works, *your* education was the wasted one
October 15, 2025 at 9:27 PM
Reposted by Tyler Hampton, PhD
“People who dislike Gen AI are just stuffy bores”

Bitch one of us actually knows how many fingers humans have and evidently it’s not ChatGPT
October 11, 2025 at 5:20 PM
Reposted by Tyler Hampton, PhD
"... which isn't even particularly excessive"

It's 34% over the speed limit. It's the difference between 6/10 pedestrians surviving a collision vs 1-2/10.

Car brain is a hell of a thing.
August 25, 2025 at 12:14 AM
Reposted by Tyler Hampton, PhD
Still remember one study where they had "AI" look at photos to diagnose cancer and it had like a 98% or 99% accuracy...

...except all the photos of actual tumors had a ruler next to them for scale. And the algorithm just got really good at spotting rulers in a photograph.
Stop offloading cognitive tasks to "generative AI." Stop using systems w/ "AI" features either inextricably woven through them or prominently displayed at the top to nudge you into their use. Stop *Designing* "AI" tools & integrations that way. Stop building or using "AI" like this. Fucking Stop it.
AI Eroded Doctors’ Ability to Spot Cancer Within Months in Study
Artificial intelligence, touted for its potential to transform medicine, led to some doctors losing skills after just a few months in a new study.
www.bloomberg.com
August 15, 2025 at 6:00 AM
Reposted by Tyler Hampton, PhD
Abandoning the vision of liberal arts education is at the heart of how fascism has come knocking.

Scientists speaking of abandoning humanities have not learned the core lessons of our history - and by our history I even mean THE HISTORY OF SCIENCE. Science is never separated from society.
August 13, 2025 at 9:19 PM
Reposted by Tyler Hampton, PhD
Academia has now spent several decades trying to drape itself in capitalism - focusing on "job-oriented" majors, talking big about connections to industry, and conversion of student loan $s to income down the line.
August 13, 2025 at 9:18 PM
Reposted by Tyler Hampton, PhD
I keep seeing this circulating, and I'm disappointed in my colleagues in the natural sciences who think if they can "shed" enough of their humanity they will somehow be protected from fascism.

Scientists who are bending to jingoism to try and protect funding aren't asking the right questions.
In which a Yale prof calls for jettisoning humanities to make way for science-only universities.

“scientists… are being punished for the sins of [humanities scholars] because we all live under one roof. I cannot see a compelling reason for our continued cohabitation.”
Unyoke the Sciences From the Humanities
Arts and sciences typically cohabitate. Should they?
thedispatch.com
August 13, 2025 at 9:16 PM
Reposted by Tyler Hampton, PhD
This from @tressiemcphd.bsky.social hit me over the head like a mallet of truth. This is the thing. This is what I’ve been trying to warn people about perfectly crystallized. www.nytimes.com/2025/08/12/o...
August 12, 2025 at 12:03 PM
Reposted by Tyler Hampton, PhD
Using AI is bad scholarship; passing it off as your own work is academic misconduct.
We have just rejected an(other) article for the use of AI. The AI had hallucinated a publication apparently written by a certain J.E. Richardson, and another (by G. Mautner) apparently published in CDS 16(2). Funnily enough, I recognised both were fictional.
A reminder: AI generates slop.
July 31, 2025 at 7:40 AM
Reposted by Tyler Hampton, PhD
Cannot emphasize enough that, if you dislike doing the things academics do (reading, writing, developing and expressing opinions about things you've read) enough to try to make a chatbot do them for you, there are lots of actual humans who would be happy to take that unpleasant job off your hands
Got gossip from a managing editor pal of a journal that a few reviewers used ChatGPT to write their assessments for them, and didn't even conceal it. I'll say it again - it won't be the neoliberal administrators that take down the humanities. It'll be humanities professors who've stopped reading.
July 1, 2025 at 7:44 PM
Reposted by Tyler Hampton, PhD
If a human told you things that were correct 80% of the time but claimed, flat out, with absolute confidence, that they were correct 100% of the time, you would dislike them & never trust a word they say. All I'm really suggesting is for people to treat chatbots with that same distrust & antagonism.
June 19, 2025 at 4:05 PM