Dr Brett H Meyer
@bretthmeyer.bsky.social
We live because everything else does.
—Richard Wagamese

He/him, Professor of ECE, researching hardware-software co-design of machine learning systems; 🇺🇸 in 🇨🇦! Views expressed are my own.

https://rssl.ece.mcgill.ca/
Pinned
There’s something delightful about starting over.
Make a xennial feel old with three words: 30th anniversary edition.

open.spotify.com/album/6nrppj...
Mellon Collie And The Infinite Sadness (30th Anniversary Edition)
open.spotify.com
November 21, 2025 at 2:51 PM
Reposted by Dr Brett H Meyer
New research supporting the radical proposition that trying to learn something works better than not trying.

theconversation.com/learning-wit...
Learning with AI falls short compared to old-fashioned web search
Doing the mental work of connecting the dots across multiple web queries appears to help people understand the material better compared to an AI summary.
theconversation.com
November 21, 2025 at 1:29 PM
Reposted by Dr Brett H Meyer
It’s widely known (and, I think, pretty uncontroversial) that learning requires effort — specifically, if you don’t have to work at getting the knowledge, it won’t stick.

Even if an LLM could be trusted to give you correct information 100% of the time, it would be an inferior method of learning it.
Relying on ChatGPT to teach you about a topic leaves you with shallower knowledge than Googling and reading about it, according to new research that compared what more than 10,000 people knew after using one method or the other.

Shared by @gizmodo.com: buff.ly/yAAHtHq
November 21, 2025 at 12:49 PM
Welp, you can either 1) stop Google from using all your email to train models, or 2) use spellcheck.
If you use Gmail, AI (Gemini) was turned on yesterday by default and now scans all of your content for machine learning. To turn it off, go to Settings > General and scroll down. Uncheck the box for "Smart features."

There are other "Smart" add-ons as well, but that's the one that reads your content.
November 20, 2025 at 11:59 PM
Reposted by Dr Brett H Meyer
These are not serious people.
November 19, 2025 at 9:05 AM
Reposted by Dr Brett H Meyer
Academia is a really racist and sexist place. Larry Summers, as the president of Harvard, epitomized that racism and sexism but he is not unique. That's why we needed DEI.
November 18, 2025 at 3:48 PM
Reposted by Dr Brett H Meyer
I'll believe the Democrats give a fuck and this isn't just about ✨political scandal points✨ when 𝑎𝑙𝑙 𝑜𝑓 𝑡ℎ𝑒𝑚 completely denounce and cut all ties with 𝑒𝑣𝑒𝑟𝑦𝑜𝑛𝑒 in these chats and start standing ten toes down for the 𝑣𝑖𝑐𝑡𝑖𝑚𝑠 . No matter who it is. No matter what.
November 16, 2025 at 12:39 AM
Reposted by Dr Brett H Meyer
"It's like asbestos" is the perfect analogy.

Useful in some contexts, dangerous and unhealthy in others we won't even know about for decades. *perfect*
Love to see community action against this AI nonsense! neighborhoodview.org/2025/11/13/d...
November 15, 2025 at 8:08 PM
I’d like the “too close to the sun” package, please! Thank you.
November 14, 2025 at 2:37 AM
Reposted by Dr Brett H Meyer
Every ad now
November 13, 2025 at 5:38 PM
Reposted by Dr Brett H Meyer
New: Google has chosen a side in Trump's mass deportation campaign. Google is hosting a CBP facial recognition app used to hunt immigrants, with no indication Google will remove it. At the same time, Google is taking down apps for reporting ICE sightings

“Big tech has made their choice”

www.404media.co/google-has-c...
Google Has Chosen a Side in Trump's Mass Deportation Effort
Google is hosting a CBP app that uses facial recognition to identify immigrants, while simultaneously removing apps that report the location of ICE officials because Google sees ICE as a vulnerable gr...
www.404media.co
November 13, 2025 at 2:11 PM
Reposted by Dr Brett H Meyer
We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art, and very often in our art, the art of words.
November 11, 2025 at 10:15 PM
When you wake up and think that it’s a warm January day rather than a cold November day.
November 11, 2025 at 9:35 PM
Reposted by Dr Brett H Meyer
I am simply unwilling to rely on these black boxes to generate anything on my behalf or for my use, nor am I willing to feed it my thinking or writing in service of unstated goals and potentially immoral uses. Some of the generative AI tools out there are potentially useful, sure, but at what cost?
“What ChatGPT says about politics — or anything — is ultimately what the people who created it say it should say, or allow it to say; more specifically, human beings at OpenAI are deciding what neutral answers to those 500 prompts might look like and instructing their model to follow their lead.”
Elon Musk’s Grokipedia Is a Warning
The centibillionaire’s Wikipedia clone is ridiculous. It’s also a glimpse of the future.
nymag.com
November 11, 2025 at 2:08 PM
Honestly, sometimes I really miss my math dork days …
Haha, this from the New Yorker is getting passed around the math dork community. I did a comic about this kind of thought a few years ago: www.smbc-comics.com/comic/commut...
November 8, 2025 at 4:23 PM
Reposted by Dr Brett H Meyer
Haha, this from the New Yorker is getting passed around the math dork community. I did a comic about this kind of thought a few years ago: www.smbc-comics.com/comic/commut...
November 7, 2025 at 5:26 PM
A few days ago I watched a movie that was prefaced by the threat of fine and imprisonment if the copyrighted work was pirated. I guess we’ll see if under late-stage capitalism theft as exploitation is protected; I’m not optimistic that OpenAI will face justice.
OpenAI pirated large numbers of books and used them to train models.

OpenAI then deleted the dataset with the pirated books, and employees sent each other messages about doing so.

A lawsuit could now force the company to pay $150,000 per book, adding up to billions in damages.
November 4, 2025 at 4:02 PM
Reposted by Dr Brett H Meyer
The poor work more than the rich. This is so obvious it shouldn't need to be said, but American ideologies of meritocracy and bootstrapping obfuscate the realities of poverty. No one works harder than the poor.

It's also expensive to be poor, but that's another post for another day.
If you make no more than 130% of the federal poverty level, you qualify for SNAP.

A family of four making $40,560 qualifies for SNAP benefits, meaning thousands of Missouri teachers qualify.

I’m tired of the bullshit. Most of the people who receive SNAP benefits work.
October 28, 2025 at 5:39 PM
Reposted by Dr Brett H Meyer
I am not a "tech critic". I am an antifascist, a feminist, an anticapitalist, an engineer. My criticism of tech flows from my politics and values. Not from a desire to save or destroy tech. Tech is an expression of power and that's what the whole conversation is about.
October 27, 2025 at 3:53 PM
As an AI researcher who is more often critical of AI than not: this, a million times this.
I don't actually hate AI, I just happen to be convinced that capitalism is gonna use it for some bad shit 🤷
October 27, 2025 at 6:59 PM
I don’t like how often I find myself having this conversation.
What if we did a single run and declared victory
October 23, 2025 at 2:53 AM
I’m grateful for researchers doing the hard work to study the consequences of AI-in-the-research-loop. In cognitive science:
Can AI simulations of human research participants advance cognitive science? In @cp-trendscognsci.bsky.social, @lmesseri.bsky.social & I analyze this vision. We show how “AI Surrogates” entrench practices that limit the generalizability of cognitive science while aspiring to do the opposite. 1/
AI Surrogates and illusions of generalizability in cognitive science
Recent advances in artificial intelligence (AI) have generated enthusiasm for using AI simulations of human research participants to generate new know…
www.sciencedirect.com
October 22, 2025 at 1:19 PM
Reposted by Dr Brett H Meyer
“The delegation of tasks to “tools & assistants” constitutes a methodological decision (…) Researchers should therefore be required to explain why they are trusting a black box that is neither open nor fair.”
@altibel.bsky.social & @petertarras.bsky.social

www.leidenmadtrics.nl/articles/why...
Why AI transparency is not enough
Recently, a taxonomy to disclose the use of generative AI (genAI) in research outputs was presented as an approach that creates transparency and thereby supports responsible genAI use. In this post we...
www.leidenmadtrics.nl
October 15, 2025 at 11:57 PM
Reposted by Dr Brett H Meyer
I’d rather pay for a thousand undocumented workers’ ER visits than a single missile fired at a Venezuelan fishing boat.
Leavitt: "When an illegal alien goes to the emergency room, who's paying for it? The American taxpayer."
October 3, 2025 at 6:26 PM
Reposted by Dr Brett H Meyer
I am seeing news that AI companies face “unexpected” obstacles in scaling up their AI systems.

Not unexpected at all, of course. Completely predictable from the Ingenia theorem.
The intractability proof (a.k.a. Ingenia theorem) implies that any attempts to scale up AI-by-Learning to situations of real-world, human-level complexity will consume an astronomical amount of resources (see Box 1 for an explanation). 13/n
May 17, 2025 at 8:48 AM