Not an expert, but I play one in every conversation.
Privacy engineer at Google, but my voice is not theirs.
What I write is either my opinion or not my opinion.
They'll store it in accounts and fractional reserve banking will take over, so the volume of bitcoin that will appear to exist will be ~100x higher than the 21M "real" BTC that can ever be mined.
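The ~100x figure is just the classic money-multiplier arithmetic. A minimal sketch (the 1% reserve ratio is an assumption chosen to produce the 100x in the comment, not a claim about any real banking rule):

```python
# Fractional-reserve money multiplier: with reserve ratio r, a monetary
# base B can back up to B / r in deposits ("apparent" supply).
def apparent_supply(base: float, reserve_ratio: float) -> float:
    """Total deposits a banking system can create from a monetary base."""
    return base / reserve_ratio

# Illustrative only: a 1% reserve ratio turns 21M BTC into ~2.1B "account BTC".
print(apparent_supply(21_000_000, 0.01))  # → 2100000000.0
```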
I still don't think the total value of bitcoin will be 4-8x the total value of actual USD (tinyurl.com/frb-usd) or 2.5x the world's cash (tinyurl.com/world-cash).
AES pays its for-profit parent company to manage the investments, and AES could pay less.
AES's returns are near the median; shopping around for better rates could save up to 0.5% in fees and improve them.
Fiona Reynolds' email was artful obfuscation.
#notEthical #irony
There are already multimodal models that can process video. There are already robots that can move around. Things like pain and pleasure can be replicated with unignorable signals. Thinking can be looped.
From what I understand, each time we remember something, it's a lossy load/save operation.
We're not operating databases with distinct fact storage.
People can have memories manufactured by mere suggestion, in replicated studies.
It's all about the guardrails.
For all the comments defaulting to "man something woman", I didn't even notice the gender of the person posting until it was so frequently pointed out. Not that anyone on social media would believe it, but whatever.
Any commercial LLM you interact with is already a composite of multiple LLMs, including specialists that screen inputs and filter outputs, summarise conversations for memory, etc.
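A toy sketch of that kind of pipeline. Everything here is hypothetical: the stub functions stand in for real LLM calls, and this is not any vendor's actual architecture.

```python
# Hypothetical multi-model chat pipeline: input filter -> generator -> summarizer.
def input_filter(text: str) -> bool:
    """Stand-in safety classifier; a real one would be its own model."""
    return "blocked" not in text

def main_model(text: str) -> str:
    """Stand-in generator model."""
    return f"response to: {text}"

def summarizer(reply: str) -> str:
    """Stand-in memory compressor; real systems summarise whole turns."""
    return reply

def chat(text: str, history: list) -> tuple:
    if not input_filter(text):
        return "Sorry, I can't help with that.", history
    reply = main_model(text)
    # Append a compressed record of the exchange for long-term memory.
    return reply, history + [summarizer(reply)]

reply, memory = chat("hello", [])  # reply == "response to: hello"
```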
An AGI wouldn't need to actually know the philosophical difference between true and false; it would just need to reliably differentiate them in practice. Same as humans, and we still get things wrong, and communicate ambiguously.
brainchip.com has a processor that's very interesting in this space.
I was avoiding Bluesky when I saw the number of notifications.
I agree with your observation of where things are at.
I don't deny that there could be a plateau. I kind of hope there will be one.
That said, ChatGPT only came out four years ago, and coding LLMs are newer still and already good enough to found companies around.
*I'm being charitable in not presuming that you're usually a dick.
I feel like you're trying to imply something, to avoid actually stating a point.
Sorry if I communicated in a way that caused you to take it as patronizing.
I feel like playing an Uno reverse card, though, on your calling what I said nonsense.
Clock speed has stagnated, correct. Are you saying that is decisive about something?
If we are able to achieve real sentience in just a few litres of meat, I don't buy the idea that there's no way to achieve a similar effect in the volume of a data centre.
It seems like religious thinking.
Maybe some people are afraid we'll discover we're just like LLMs.
I'd say the world is full of people standing on (effectively) exponential curves, saying "looks linear to me".
Everyone I work with has seen the progress. We all rely on GenAI more and more, both as improved tooling, and to handle research and busywork, with oversight and verification.
It's still early days, but I can already see my own obsolescence.
Lower energy per token.
RAG.
Chain-of-thought.
Summarisation for long-term memory.
Multi-modality.
Modular LLMs serving specialised purposes, working as parts of a whole.
Experimental models like spiking neural nets are pretty interesting on the efficiency front.
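To make the RAG item concrete, here's a toy illustration of its retrieval step. Everything is an assumption for illustration: the bag-of-words vectors, the sample documents, and the idea that a real system would swap in a neural embedder and prepend the hit to the LLM prompt.

```python
# Toy RAG retrieval: pick the stored document most similar to the query.
from collections import Counter
import math

DOCS = [
    "spiking neural nets fire sparsely to save energy",
    "chain of thought lets a model reason step by step",
    "summaries compress long conversations into memory",
]

def vec(text: str) -> Counter:
    """Bag-of-words vector; real systems use neural embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def retrieve(query: str, docs=DOCS) -> str:
    # The retrieved text would be prepended to the LLM prompt as context.
    return max(docs, key=lambda d: cosine(vec(query), vec(d)))

print(retrieve("how does a model reason step by step"))
# → chain of thought lets a model reason step by step
```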