Matthew Chalmers
@matthewchalmers.bsky.social
Computer scientist into Ubicomp, HCI, theory and (a long time ago) data visualisation. Also kind of keen on mountain things, fine food things, and fine food in the mountains.
Reposted by Matthew Chalmers
2025 was supposed to be the Year of the A.I. Agent. According to @garymarcus.bsky.social, however, AI agents have been a dud. "They’re building clumsy tools on top of clumsy tools," the author of "Taming Silicon Valley" told Cal Newport for the @newyorker.com:
Why A.I. Didn’t Transform Our Lives in 2025
This was supposed to be the year when autonomous agents took over everyday tasks. The tech industry overpromised and underdelivered.
www.newyorker.com
January 9, 2026 at 6:28 PM
Reposted by Matthew Chalmers
Oh come on. What an absurd bit of PR. Do first responders and air traffic controllers need to melt several thousand nvidia GPUs to send data and voice??? Just completely shameless!!
Big Tech is objecting to proposals that would require data centers to disconnect from the grid during periods of peak demand so ratepayers aren't forced to pay for grid expansion. Note the deceptive PR about how always-on data centers are needed for first responders & air traffic controllers.
The Fight Over Making Data Centers Power Down to Avoid Blackouts
Power-grid operators are asking tech companies to supply their own electricity—or go dark at times—but many are pushing back.
www.wsj.com
January 7, 2026 at 4:30 PM
Reposted by Matthew Chalmers
Just excellent and all they deserve.

www.ft.com/content/ad94...
Who’s who at X, the deepfake porn site formerly known as Twitter
A look inside Elon Musk’s big tent
www.ft.com
January 6, 2026 at 4:51 PM
Reposted by Matthew Chalmers
As @jbakcoleman.bsky.social and I wrote, “Every time a scientist abdicates their work to an AI tool, that is a tacit admission that the work is not worth being done by the scientist.”

Same goes for instructors.
You’ll have to read the piece for the “what happens next” part, but I can tell you what will happen when parents and students realize profs have handed over expertise and teaching to chatbots.
NYU professor tested students with AI oral exams, here's what happened next
When student work looked like McKinsey memos, an NYU business school professor used AI oral exams to test real learning.
www.businessinsider.com
January 5, 2026 at 5:14 PM
Reposted by Matthew Chalmers
The video that will make your day better: A post-American, enshittification-resistant internet by @doctorow media.ccc.de/v/39c3-a-pos...
A post-American, enshittification-resistant internet
Trump has staged an unscheduled, midair rapid disassembly of the global system of trade. Ironically, it is this system that prevented all...
media.ccc.de
January 5, 2026 at 2:01 PM
Reposted by Matthew Chalmers
As we've been saying for a while: the "backlash" against climate policies has been wildly, absurdly overstated. This new survey shows how UK MPs badly understate support for climate policies

www.theguardian.com/environment/...
January 5, 2026 at 3:14 PM
Reposted by Matthew Chalmers
This excellent film may start a little slow, but stick with it. It's a powerful short documentary about neo-colonial exploitation in the digital economy.
An incredible and beautiful short documentary by Nicolas Gouralt who interviewed online workers from Venezuela, Kenya and the Philippines who annotate images to train A.I. systems.

Here is a gift link.

www.nytimes.com/2026/01/02/o...
January 4, 2026 at 8:25 AM
Reposted by Matthew Chalmers
There are hundreds of LinkedIn accounts paying 25 euros for a GenAI "movie trailer" based on their LinkedIn profiles (hard to calculate the ⚡cost, but for 15 3-sec clips, maybe 10-20 kWh, about 2 days of electricity for a large UK household)

Possibly worse: most of them feature THIS SCENE from Django Unchained:
January 3, 2026 at 7:09 PM
Reposted by Matthew Chalmers
1. Headlines everywhere today read "Grok apologizes."

This is bullshit. A chatbot is not something that can apologize.

Pretending otherwise is simply laundering these companies' bullshit about what AI is, while diffusing blame away from the human beings who developed and released this system.
January 3, 2026 at 12:12 AM
Reposted by Matthew Chalmers
I wish it was more widely known that Google fought like absolute mongrels against media outlets to avoid disclosing water consumption data - the only reason they disclose that data now is because they were pressured by activists and journalists

cloud.sustainability.watch/explore-issu...
January 2, 2026 at 9:22 PM
Tee hee hee.
January 2, 2026 at 9:55 PM
Reposted by Matthew Chalmers
Listen to Catharina Doria. I promise, it is good
This Brazilian woman hates AI
January 1, 2026 at 5:52 PM
Reposted by Matthew Chalmers
Happy new year everyone. As always, here's a link to my faves from last year. Latest at the top, scroll down to move through the year.

Link:
www.stevecarter.com/latest/lates...
January 1, 2026 at 3:57 AM
Madness. And/or evil.
Bob and I were among the hundreds of researchers who were supposed to conduct the 6th U.S. National Climate Assessment. Now it looks like they're gonna produce it with a few people and Grok? Communities need rigorous and accurate information about climate change. This will put communities at risk.
January 1, 2026 at 7:39 AM
Reposted by Matthew Chalmers
We got Meta’s “general global playbook” for defeating advertiser verification regulations, which the company knows would reduce scams. It includes making scam ads “not findable” for regulators searching Meta’s ad library through targeted scrubbing.

www.reuters.com/investigatio...
Meta created ‘playbook’ to fend off pressure to crack down on scammers, documents show
As regulators pressure Meta to verify the identity of advertisers on Facebook and Instagram, the social media giant has drafted a “playbook” to stall them. A Reuters investigation examines its tactics...
www.reuters.com
December 31, 2025 at 2:38 PM
Reposted by Matthew Chalmers
"When we give credence to the idea of AGI … it signals that a computer program that is proficient at … predicting words from other words … can do important social and economic work, such as addressing gaps in major social services, doing science autonomously, and “solving” climate change.”
The Myth of AGI | TechPolicy.Press
Alex Hanna and Emily M. Bender write that claims of "Artificial General Intelligence" are a cover for abandoning the current social contract.
www.techpolicy.press
December 30, 2025 at 10:09 PM
Reposted by Matthew Chalmers
I hate these charts so much because they imply chatgpt is comparable to the internet or phones

You could create the same graphic for full screen pop-up advertising on websites and make them look like ultra-rapid technology adoption when really they were just baked unavoidably into the internet
In 2025, AI became pervasive in American life and the economy, with ChatGPT surging in adoption much faster than any other major technology in memory. @nytopinion.nytimes.com
December 30, 2025 at 7:38 AM
Impressive work.
Just watched the new Knives Out and I think it's really important you know that the scene in the Seminary's Gym is filmed in the same place Rick Astley filmed the music video for Never Gonna Give You Up.

I saw the window tracery and immediately made my friends pause the film so I could tell them.
December 29, 2025 at 8:10 PM
Reposted by Matthew Chalmers
"We should be destroying these things whenever we see them"

Thank you Seth Rogen
December 29, 2025 at 8:32 AM
Reposted by Matthew Chalmers
This is absolutely wild, and super important.

There are zillions of studies claiming that fMRI signals indicate increased brain activity, and it looks like that's often just wrong.

If confirmed, this means we've misinterpreted a lot of research.
"40 percent of MRI signals do not correspond to actual brain activity"; "Since tens of thousands of fMRI studies worldwide are based on this assumption, our results could lead to opposite interpretations in many of them.”
www.tum.de/en/news-and-...
40 percent of MRI signals misinterpreted
Interpretation of numerous MRI data may be incorrect: blood flow is not a reliable indicator of brain activity.
www.tum.de
December 28, 2025 at 10:00 AM
Reposted by Matthew Chalmers
Remember:

When Tylenol was poisoned *by an outsider* and killed people, the company recalled all their products & redesigned them.

When Intel’s Pentium had a bug so obscure it affected 1 in _9 billion_ long division calculations, they recalled their chips.

ChatGPT was made deadly *by its team*.
i don’t see how any of the benefits of chatgpt and other consumer-facing LLMs can possibly outweigh their incredible ability to induce suicides
Adam Raine’s life hurtled toward tragedy soon after he began talking with ChatGPT about homework. Analysis of his ChatGPT account shows how the chatbot became a confidant as he planned to end his life.
December 28, 2025 at 6:21 PM
Reposted by Matthew Chalmers
The surge of AI slop, deceptive synthetic videos & images flooding our feeds, is driven in large part by Agentic AI Accounts (AAAs). aiforensics.org/work/agentic...

Key findings:
-Over 43,000 mostly AI-made posts generated 4.5 billion views
-More than 65% of accounts were created in early 2025

1/
aiforensics.org
December 27, 2025 at 6:33 PM