Max Woolf
@minimaxir.bsky.social
Senior Data Scientist at BuzzFeed in San Francisco // AI content generation ethics and R&D // plotter of pretty charts

https://minimaxir.com
Pinned
New blog post up: I spent a lot of time researching Nano Banana, Google's new generative AI model, and not only is it substantially better than ChatGPT, it is capable of taking extremely nuanced prompts even thousands of tokens long to generate exactly what you want. minimaxir.com/2025/11/nano...
Nano Banana can be prompt engineered for extremely nuanced AI image generation
Nano Banana allows 32,768 input tokens and I’m going to try to use them all dammit.
minimaxir.com
November 13, 2025 at 5:40 PM
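For anyone curious what "an extremely nuanced prompt" looks like in code, here is a minimal sketch of calling Nano Banana through Google's google-genai Python SDK with a long, constraint-heavy prompt. The model identifier, prompt text, and output handling are illustrative assumptions, not taken from the blog post itself.

```python
# Minimal sketch: send a long, multi-constraint prompt to Nano Banana via the
# google-genai SDK. Model name and handling below are assumptions for illustration.
from google import genai

client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

# A nuanced, constraint-heavy prompt; the blog post argues prompts can run
# thousands of tokens (up to 32,768) and still be followed closely.
prompt = """Generate an image of a kitchen counter at golden hour.
Constraints:
- A single banana in the exact center, resting on a ceramic plate.
- Soft rim lighting from a window on the left; shallow depth of field.
- No text, no people, no watermarks.
"""

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed identifier for Nano Banana
    contents=prompt,
)

# Save any returned image bytes to disk.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("banana.png", "wb") as f:
            f.write(part.inline_data.data)
```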
LLMs when you set temperature = 2.0

"skin as part of brain?"

This is that great intellect people talk about?
November 13, 2025 at 5:34 AM
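For context on the joke: temperature controls how flat the sampling distribution is, and 2.0 is the maximum most chat APIs allow, which is where output degrades into word salad like the quote above. A minimal sketch of setting it, using the OpenAI Python SDK with a placeholder model name:

```python
# Minimal sketch of "temperature = 2.0" in practice; model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4.1-mini",  # placeholder model
    temperature=2.0,       # maximum allowed value: flattens the token distribution,
                           # so low-probability tokens get sampled and coherence drops
    messages=[{"role": "user", "content": "Explain how the brain works."}],
)
print(response.choices[0].message.content)
```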
November 13, 2025 at 12:41 AM
My 25 minute Nano Banana blog post will be up tomorrow, in which I highlight a funny failure state.
November 12, 2025 at 10:02 PM
OpenAI's announcement of GPT-5.1 immediately triggered the flame war detector on Hacker News, which is fun.
November 12, 2025 at 7:41 PM
Reposted by Max Woolf
i am begging every skeptic to find some middle ground, any middle ground at all, between rejecting speculation as sci-fi and refusing to speculate even slightly about what happens next
A relatively small number of people in certain jobs say that ChatGPT and other LLMs have made them more productive at work. But in the overall economy, it does not look like net productivity is up.

Most of the supposed value is in sci-fi speculation. “Imagine a machine that cures cancer.”
I honestly don’t get the value of this company. They hoover up energy and water. Their product constantly gets things wrong and, in extreme cases, coaches people into suicide.

And it’s all built on what seems to be malicious and vast intellectual property theft.

What does OpenAI offer the world?
November 9, 2025 at 4:52 PM
i kinda want to try "this is your last chance to dodge a block" as a prompt engineering technique.
if i ever reach this point then just put me down
November 7, 2025 at 4:19 PM
Reposted by Max Woolf
Sure, an individual subscriber of a blocklist is not enduring a significant harm if they are missing content from one person due to a false positive. An individual person who is unable to appeal being structurally blocked by unaccountable parties could be, though. And that does happen.
November 3, 2025 at 10:51 PM
Reposted by Max Woolf
totally agree with Will - but Bluesky is well past the point of "an open commons".

most of its highly engaged audience prioritize their own local communities' sensitivities vs shared discourse

most of the users who might see margaret or giada's complaints here would see it as a feature, not a bug
Blocking hatemongers is one thing but this trend of using mass block lists to avoid hearing from people who might or might not hold points of view you expect to disagree with is unhealthy imo and hurts the platform.

Bluesky was better off with folks like Margaret Mitchell and Giada Pistilli on it.
November 3, 2025 at 7:29 PM
quote tweets here are very very funny
Blocking hatemongers is one thing but this trend of using mass block lists to avoid hearing from people who might or might not hold points of view you expect to disagree with is unhealthy imo and hurts the platform.

Bluesky was better off with folks like Margaret Mitchell and Giada Pistilli on it.
November 3, 2025 at 7:08 PM
Reposted by Max Woolf
It’s real
November 2, 2025 at 7:21 PM
I posted on /r/sanfrancisco about how my Safeway was redesigned so you can't leave without triggering an alarm unless you buy something, and a surprising number of replies are "just trigger the alarm nerd"

I'm a terrible San Franciscan.
November 2, 2025 at 1:45 AM
Reposted by Max Woolf
if the dislikes are attributable (meaning you can see who disliked stuff) it’s going to cause a lot of drama and likely see negative patterns on large group engagement

if the dislikes are hidden, it’s going to cause a lot of drama but mostly distrust towards the network & admins

so win/win
“As users ‘dislike’ posts, the system will learn what sort of content they want to see less of. This will help to inform more than just how content is ranked in feeds, but also reply rankings.”
Bluesky hits 40 million users, introduces 'dislikes' beta | TechCrunch
As users "dislike" posts, the system will learn what sort of content they want to see less of. This will help to inform more than just how content is ranked in feeds, but also reply rankings.
techcrunch.com
October 31, 2025 at 11:06 PM
Reposted by Max Woolf
I'm anti-anti-anti AI I think. It probably could be used in responsible ways for many applications, but it's appropriate that it face stiff headwinds when doing so to keep the people who want to use it honest. The skepticism is healthy.
October 31, 2025 at 6:56 PM
It's annoying that self-aware, funny instances of AI context collapse are now filled with replies of THAT'S WHAT YOU GET FOR USING AI YOU IDIOT
October 30, 2025 at 5:57 PM
did Meta really have to embargo this

and who sets an embargo at X:45
October 30, 2025 at 5:14 PM
Reposted by Max Woolf
Meta stock drops 10% after Q3 earnings call due to a $15.9B expense for hiring 4 AI researchers
October 29, 2025 at 10:15 PM
yeah I kinda expected that much from Grokipedia
October 27, 2025 at 9:17 PM
Very Belated August + September Update on my Patreon www.patreon.com/posts/142112...
October 26, 2025 at 6:40 PM
Assuming that they are already embedding every tweet as vectors, there are a few RAG/nearest-neighbor shenanigans they could do to generate recs cheaply. (e.g. clustering)

Not as clickbaity as "an LLM will generate recommendations" but technically the same thing
Musk: X will delete all heuristics from its recommendation system within six weeks. ...the familiar logic of likes, replies, and reposts that shaped Twitter for years is about to disappear. In its place, Grok, the platform's in-house AI model, will read and watch more than one hundred million posts.
Brace for another wave of X refugees
October 26, 2025 at 5:38 PM
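The "shenanigans" described in that post could look something like the sketch below: average a user's liked-post embeddings into a taste vector, then pull the nearest unseen posts, no LLM call in the loop. The library choices and data are illustrative assumptions, not anything X has confirmed.

```python
# Minimal sketch of cheap nearest-neighbor recommendations over post embeddings.
# All data here is synthetic; dimensions and counts are arbitrary assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
post_embeddings = rng.normal(size=(100_000, 256)).astype(np.float32)  # all posts
liked_idx = rng.choice(100_000, size=50, replace=False)               # posts a user engaged with

# Build a user "taste vector" by averaging liked-post embeddings.
user_vector = post_embeddings[liked_idx].mean(axis=0, keepdims=True)

# Recommend the nearest unseen posts by cosine distance.
nn = NearestNeighbors(n_neighbors=60, metric="cosine").fit(post_embeddings)
_, candidates = nn.kneighbors(user_vector)
recs = [i for i in candidates[0] if i not in set(liked_idx)][:10]
print(recs)
```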
Reposted by Max Woolf
Really the most Meta-like tactic on display from OAI lately is to release products that have obvious harms and say "oops!" two days later and add some minor safeguard
With an influx of Meta alums, some OpenAI staffers worry it is adopting Meta's tactics, like using social media dynamics with Sora and a softening stance on ads (The Information)

Main Link | Techmeme Permalink
October 24, 2025 at 7:53 PM
I'm annoyed that after a decade of "don't feed the trolls, as feeding them demonstrably gives them influence" espoused by more experienced internet users, they just can't resist reposting and dunking on AI trolls in an attempt to own them.
October 24, 2025 at 5:13 PM
Watching Law and Order SVU where the detectives ask the head of an AI image generation company for user data and the head replies:

"Sorry, we purge identifying information from our system once an image is generated. AI uses up a lot of data, so we have to optimize our storage space."

lolwut
October 22, 2025 at 3:32 AM