David Crafti
@dcrafti.bsky.social
Disillusioned, but trying my best to improve what can be improved, and slow bad inevitables.
Not an expert, but I play one in every conversation.
Privacy engineer at Google, but my voice is not theirs.
What I write is either my opinion or not my opinion.
As BTC use expands, people will have/need very little actual bitcoin. The libertarian vision of it is dead.
They'll store it in accounts and fractional reserve banking will take over, so the volume of bitcoin that will appear to exist will be ~100x higher than the 21M "real" BTC that can be minted.
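A back-of-the-envelope sketch of that ~100x claim, assuming the textbook money-multiplier model (a simplification; real banking is messier):

```python
# Textbook money multiplier: apparent supply = base / reserve ratio.
# Illustrative only -- real fractional-reserve dynamics are more complex.
BASE_BTC = 21_000_000  # hard cap on mintable bitcoin

def apparent_supply(base: float, reserve_ratio: float) -> float:
    """Total deposits a banking system can support on a given monetary base."""
    return base / reserve_ratio

# A ~1% reserve ratio would make ~100x the real supply appear to exist.
print(apparent_supply(BASE_BTC, 0.01))  # → 2100000000.0
```

So "~100x" corresponds to banks holding roughly 1% reserves against deposits.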
November 28, 2025 at 10:27 PM
When asked in ~2016, I said order-of-magnitude-wise, I thought Bitcoin would get to $100k, but unlikely to $1M on normal timelines.
I still don't think the total value of bitcoin will be 4-8x the total value of actual USD (tinyurl.com/frb-usd) or 2.5x the world's cash (tinyurl.com/world-cash).
Federal Reserve Board - Money Stock Measures - H.6 - November 25, 2025
www.federalreserve.gov
November 28, 2025 at 10:27 PM
Gemini explained it to me: tl;dr:
AES pays its for-profit parent company to do the investing, and AES could pay less.
AES's returns are near the median, and could improve by up to 0.5% by seeking better fee rates.
Fiona Reynold's email was artful obfuscation.
#notEthical #irony
November 28, 2025 at 9:56 PM
I posted in more detail in another thread, so the tl;dr is: those distinctions are all solvable.
There are already multimodal models that can process video. There are already robots that can move around. Things like pain and pleasure can be replicated with unignorable signals. Thinking can be looped.
October 31, 2025 at 4:49 PM
That's the same as humans, right?

From what I understand, each time we remember something, it's a lossy load/save operation.

We're not operating databases with distinct fact storage.

People can have memories manufactured by mere suggestion, in replicated studies.

It's all about the guardrails.
October 31, 2025 at 4:35 PM
I didn't intend to be patronizing. I've opened the paper to read.

For all the comments defaulting to "man something woman", I didn't even notice the gender of the person posting until it was so frequently pointed out. Not that anyone on social media would believe it, but whatever.
October 31, 2025 at 4:30 PM
Re meta-cognitive abilities, it depends what you include, but there's already been a lot of progress in that space.

Any commercial LLM you interact with already composes multiple LLMs, including specialists that screen inputs and filter outputs, summarise conversations for memory, etc.
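A hypothetical sketch of that composition; every component below is a stand-in stub, not any vendor's real API or architecture:

```python
# Stubbed pipeline of specialist models around a main LLM.
# All functions are hypothetical placeholders for illustration.

def screen_input(prompt: str) -> str:
    # a specialist safety model would vet the prompt here; stubbed as pass-through
    return prompt

def generate(prompt: str) -> str:
    # the main LLM; stubbed to echo its input
    return f"response to: {prompt}"

def filter_output(text: str) -> str:
    # a specialist filter model would vet the response here
    return text

def summarise_for_memory(turns: list[str]) -> str:
    # a summariser model would compress the exchange for long-term memory
    return " | ".join(turns)

def chat(prompt: str, memory: list[str]) -> str:
    reply = filter_output(generate(screen_input(prompt)))
    memory.append(summarise_for_memory([prompt, reply]))
    return reply
```

The point is only the shape: several narrow models wrapping one generalist, with memory handled by yet another.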
October 31, 2025 at 4:26 PM
I haven't been suggesting that LLMs are the entirety of the solution.
An AGI wouldn't need to actually know the philosophical difference between true and false; it would just need to reliably differentiate them in practice. Same as humans, and we still get things wrong, and communicate ambiguously.
October 31, 2025 at 4:19 PM
I think (without being confident) that progress in spiking neural networks could be the breakthrough that allows AI processing to become efficient enough to reduce the size and energy use of AI from the current data-centre level.
brainchip.com has a processor that's very interesting in this space.
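For a feel of why spiking nets can be so efficient, here's a toy leaky integrate-and-fire neuron, the basic unit of spiking networks; the parameters are illustrative and not from any BrainChip product:

```python
# Toy leaky integrate-and-fire (LIF) neuron: membrane voltage leaks each
# step, integrates input, and emits a discrete spike on crossing threshold.
# Computation happens only at spikes, which is the source of the efficiency.

def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Return a 0/1 spike train for a sequence of input currents."""
    v, spikes = 0.0, []
    for i in inputs:
        v = v * leak + i          # leaky integration
        if v >= threshold:
            spikes.append(1)      # fire
            v = 0.0               # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.5, 0.5, 0.5]))  # → [0, 0, 1]
```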
October 31, 2025 at 4:14 PM
That said, and while I've said I could be wrong, I really do think we've got all the parts we need, which just need to be (sorry to say this) plumbed together, to make something that can pass for AGI, whether or not it contains sentience. I'm not saying the plumbing will be straightforward though.
October 31, 2025 at 4:10 PM
I mined Bitcoin in 2011 and really liked the ideas and philosophy, but eventually (unfortunately) got rid of my various crypto holdings (still making an ok amount), starting from when I saw my first Bitcoin Facebook group and realised the meme-based idiocy at the heart of the market.
October 31, 2025 at 4:07 PM
Thanks for your replies. I opened the video to watch later. I read that Anthropic paper on Oct 9. It was similar to a concept I wrote about at work that I called "LLM Namshubs", which didn't catch on. I have thoughts about solutions.

I was avoiding BlueSky when I saw the number of notifications.
October 31, 2025 at 4:01 PM
Thank you for a reasonable, nuanced take.
I agree with your observation of where things are at.
I don't deny that there could be a plateau. I kind of hope there will be one.
That said, ChatGPT only came out about three years ago, and coding LLMs are newer and already good enough to found companies around.
October 17, 2025 at 7:49 AM
Turns out, after your last post mocking me, because you decided to not be a nice person*, you can go say that to someone else.

*I'm being charitable in not presuming that you're usually a dick.
October 17, 2025 at 7:42 AM
Dude, get a life.
October 17, 2025 at 7:35 AM
So, you're saying computing power has stagnated?

I feel like you're trying to imply something, to avoid actually stating a point.
October 17, 2025 at 7:34 AM
Patronized you? I see it more as not immediately accepting the word of a stranger on the internet.
Sorry if I communicated in a way that caused you to take it as patronizing.

I feel like playing an Uno reverse card, though, on your calling what I said nonsense.
October 17, 2025 at 7:31 AM
And what's happened to the transistor count and thread count?
Clock speed has stagnated, correct. Are you saying that is decisive about something?
October 17, 2025 at 7:27 AM
Architectures can change.
If we are able to achieve real sentience in just a few litres of meat, I don't buy the idea that there's no way to achieve a similar effect in the volume of a data centre.
It seems like religious thinking.
Maybe some people are afraid we'll discover we're just like LLMs.
October 17, 2025 at 7:25 AM
Nothing in the real world is infinitely exponential.
I'd say the world is full of people standing on (effectively) exponential curves, saying "looks linear to me".
October 17, 2025 at 7:20 AM
I've seen the improvements over the last few years.
Everyone I work with has seen the progress. We all rely on GenAI more and more, both as improved tooling, and to handle research and busywork, with oversight and verification.
It's still early days, but I can already see my own obsolescence.
October 17, 2025 at 7:17 AM
Longer context windows.
Lower energy per token.
RAG.
Chain-of-thought.
Summarisation for long-term memory.
Multi-modality.
Modular LLMs serving specialised purposes, working as parts of a whole.
Experimental models like spiking neural nets are pretty interesting on the efficiency front.
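One item from that list, RAG, can be sketched in a few lines; retrieval here is naive word overlap over a made-up corpus, where real systems use embedding similarity search:

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# DOCS and the overlap scoring are illustrative stand-ins.

DOCS = [
    "Spiking neural networks fire discrete events instead of dense activations.",
    "Longer context windows let a model see more of the conversation.",
]

def retrieve(query: str, docs=DOCS) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def augmented_prompt(query: str) -> str:
    """Prepend the retrieved context so the model answers from it."""
    return f"Context: {retrieve(query)}\nQuestion: {query}"
```

The design point: the model itself is unchanged; grounding comes from what you staple onto the prompt.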
October 17, 2025 at 7:17 AM