The problem with that is that LLMs don't discover causal links; they find semantic links. You might think this is the same thing, but it's not.
Connecting things semantically is how you become convincingly wrong.
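A toy sketch of the gap (mine, not from the post; bag-of-words cosine similarity is only a crude stand-in for learned semantic association): a similarity measure can rate two causally opposite claims as identical.

```python
# Toy illustration: surface similarity is blind to causal direction.
# "smoking causes cancer" and "cancer causes smoking" get identical
# bag-of-words similarity, though the causal claims are opposite.
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between the bag-of-words vectors of two sentences."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b)

print(cosine_similarity("smoking causes cancer", "cancer causes smoking"))   # 1.0
print(cosine_similarity("smoking causes cancer", "smoking prevents cancer")) # ~0.67
```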
“A.I. is no less a form of intelligence than digital photography is a form of photography,” the philosopher Barbara Gail Montero writes in a guest essay. “And now A.I. is on its way to doing something even more remarkable: becoming conscious.”
In other words, LLMs appear intelligent because they reproduce the output already created by intelligent humans.
They are little more than a snapshot.
No surprise, then, that they're so easily fooled into thinking an LLM exhibits intelligence.
IMO, intelligence requires the ability to use knowledge to solve novel problems. LLMs can't do this.
But there will be economic fallout for years.
When all the costs are added up, AI automation will prove (as all automation does) to cost as much as the people who did the jobs before.
This position can only be reached if your ignorance of how LLMs work matches your ignorance of how the brain works.
LLMs are single-shot processors with no mechanisms for inference or awareness.
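To make "single-shot" concrete, here's a minimal sketch of the generation loop (my illustration; the lookup table is a toy stand-in for a neural forward pass, which in a real model conditions on the whole context, not just the last token):

```python
import random

# Toy next-token table standing in for a neural forward pass.
NEXT = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt: list[str], max_tokens: int = 5) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = NEXT.get(tokens[-1])
        if dist is None:
            break  # no known continuation: stop
        words, weights = zip(*dist.items())
        # One shot per token: sample, append, repeat. The only "memory" is
        # the context passed back in; there is no separate reasoning loop.
        tokens.append(random.choices(words, weights=weights)[0])
    return tokens

print(generate(["the"]))  # e.g. ['the', 'cat', 'sat', 'down']
```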
And in many companies, the issue with the people making the bad tech decisions is that they know very little about tech.
So how about a look back on some thoughts from 2020 to see if I was barking up the right tree:
However, there are people who hate true agile, and I believe they hate it because test-and-learn quickly and repeatedly shows that their idea was wrong.
Also them:
Regardless of AI's utility as an automation technology, cuts of this nature are the start or continuation of a downward spiral.
This place is not for the latter, as the replies demonstrate 😂
How long does it take?
Is it inevitable?
Can it be measured?
Is it reversible?
I feel a blog post is in order.
IMO, the people making these claims are doing so with zero understanding of how LLMs work, or they're being deliberately misleading.
Either way, it's embarrassing.
Experiments turn assumptions into facts... Always.
They might not be the facts you expected but they are the facts you need.
Test and learn, people.
Test and learn.
What we need is a new kind of politics that unites us around something on which we can all agree.
That's why, on Monday, I'll be launching the "Cats are obviously better than dogs" party.
Onwards together!
LLMs are trained on human writing, so the sentences they generate echo the reasoning that people share in the same contexts.
It's no surprise, therefore, that the resulting text appears to convey inference and understanding.
No more surprising than an audiobook doing so.
arxiv.org/abs/2510.19687
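To sketch why (my toy example, not from the paper linked above): "training" on next-token prediction just records the statistics of human text, so generation can only replay patterns, sound reasoning and fallacies alike, that humans already wrote down.

```python
from collections import Counter, defaultdict

# Note the corpus itself contains a fallacy (affirming the consequent);
# the model will echo it just as faithfully as it echoes valid reasoning.
corpus = "if it rains the ground gets wet . if the ground gets wet it rained".split()

counts: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # "training" = recording human word statistics

def continuation(word: str, steps: int = 4) -> list[str]:
    out = [word]
    for _ in range(steps):
        if out[-1] not in counts:
            break
        out.append(counts[out[-1]].most_common(1)[0][0])  # echo the corpus
    return out

print(continuation("ground"))  # ['ground', 'gets', 'wet', '.', 'if']
```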
It's true that there can only be one...
...this one
That's 30 times the energy you need to burn for no real benefit - even if "it's just one program, bro".