Blueprint
@twitteraddict.bsky.social
oh, you've read a few academic papers on the matter? cute. i have read over 100,000 posts.

Liberal, progressive, whatever... But right now what you're for matters a whole lot less than what you're against.
My sister knows what it is and she's extremely insistent that it stays on
November 27, 2025 at 11:32 AM
Z-image is pretty close to as good, and runs on any old gaming PC
November 27, 2025 at 11:04 AM
White *AND not liberal
November 26, 2025 at 11:32 PM
THE SPICE MUST FLOW
November 26, 2025 at 9:15 PM
The highest-quality SOTA models are way more expensive to run than mid-tier or older models, although any given model gets cheaper to run over time. It's a mix of that and some users just using it an insane amount, probably as a full-time assistant for a coding or writing job
November 26, 2025 at 8:46 PM
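To make that cost gap concrete, here's a rough Python sketch; the per-token prices are hypothetical placeholders, not any real vendor's pricing, just to show how a per-token gap compounds for a heavy user:

```python
# Hypothetical per-million-token output prices (placeholders, not real pricing).
price_per_mtok = {"frontier_model": 15.00, "mid_tier_model": 0.50}

monthly_tokens = 30e6  # a heavy user, e.g. a full-time coding assistant

for model, price in price_per_mtok.items():
    cost = monthly_tokens / 1e6 * price
    print(f"{model}: ${cost:,.2f}/month")
# A ~30x per-token gap turns ~$15/month into ~$450/month at this usage level.
```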
The cost of actually providing access to models has gotten way cheaper over the last few years, and will continue to get cheaper. You can already run lower-end models directly on a laptop or mobile phone, and hardware that can run higher-end models is getting pretty affordable. AI isn't going away.
November 26, 2025 at 8:43 PM
So here's the thing though: the big AI companies are losing tons of money, but that's mostly because they're spending a ton to train upcoming models. If they just stopped doing that, they'd theoretically be profitable (at least until you take things like pre-existing loans into consideration)
November 26, 2025 at 8:38 PM
You can run low-VRAM LLMs on any decent laptop, but they will be dumber than the state-of-the-art models, and will also take longer to write. But there is specialty consumer hardware coming out that lets you run much better models with higher VRAM usage, and it's expensive but not unaffordable
November 26, 2025 at 8:21 PM
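As a sketch of the "any decent laptop" claim: a minimal example using the llama-cpp-python bindings to run a small quantized GGUF model on CPU (the model file name and path here are hypothetical; you'd download a real quantized model first):

```python
# Minimal local inference sketch, assuming `pip install llama-cpp-python`
# and a downloaded GGUF file. The path below is a hypothetical example.
from llama_cpp import Llama

llm = Llama(
    model_path="models/small-instruct-q4_k_m.gguf",  # hypothetical local file
    n_ctx=2048,      # context window; larger values use more RAM
    n_gpu_layers=0,  # 0 = pure CPU; raise this if you have spare VRAM
)

out = llm("Explain VRAM in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```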
The speed will also generally be lower than using ChatGPT, but not to an insane degree. And that's with current hardware... In a few years, it'll be trivial to run all but the best LLMs with anything resembling a gaming PC. And if the AI bubble crashed, it would make consumer hardware cheaper
November 26, 2025 at 8:15 PM
So with consumer hardware, VRAM is the big limitation in running the best AI models. The low-VRAM models are noticeably worse than state-of-the-art models... But there's hardware coming out that's specifically made to run high-VRAM models at home. The gap between SOTA and home models keeps shrinking
November 26, 2025 at 8:12 PM
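For a back-of-envelope sense of why VRAM is the bottleneck: weight memory is roughly parameter count times bytes per parameter, before counting KV cache and activations. A quick sketch:

```python
# Rough lower bound on memory needed just for model weights.
# Real usage adds KV cache and activation overhead on top.
def weight_gb(params_billion: float, bits_per_param: int) -> float:
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

for params in (7, 70):
    for bits in (16, 8, 4):
        print(f"{params}B params @ {bits}-bit: ~{weight_gb(params, bits):.1f} GB")
# 7B at 4-bit (~3.5 GB) fits a mid-range gaming GPU; 70B at 4-bit (~35 GB)
# needs the kind of high-VRAM consumer hardware mentioned above.
```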