Jon Ayre
@jonayre.uk
jonayre.uk
I've been an engineer, software developer, architect, director & CTO. Now I'm a hands-on consultant doing business and tech strategy. Creator of the business evolution map. Father & Husband. It WILL go wrong & it CAN be fixed. He/him.
The text talks of using AI to discover "causal links" in historical content.

The problem with that is that LLMs don't discover causal links; they find semantic links. You might think this is the same thing but it's not.

Connecting things semantically is how you become convincingly wrong.
I imagine this will be delightfully controversial, but is ‘to caulk gaps in incomplete scholarship and large datasets’ just a fancy way of saying ‘make stuff up’?
November 15, 2025 at 8:48 AM
A better analogy would be "AI is to intelligence what a photograph of a person is to the actual person"

In other words, LLMs appear intelligent because they reproduce the output already created by intelligent humans.

They are little more than a snapshot.
In @nytopinion.nytimes.com

“A.I. is no less a form of intelligence than digital photography is a form of photography,” the philosopher Barbara Gail Montero writes in a guest essay. “And now A.I. is on its way to doing something even more remarkable: becoming conscious.”
Opinion | A.I. Is Already Intelligent. This Is How It Becomes Conscious.
Skeptics overlook how our concepts change.
nyti.ms
November 10, 2025 at 7:38 AM
For generations people have been persuaded that the ability to regurgitate information is intelligence.

No surprise then that they're so easily fooled into thinking an LLM exhibits intelligence.

IMO, intelligence requires the ability to use knowledge to solve novel problems. LLMs can't do this.
November 10, 2025 at 7:13 AM
Reposted by Jon Ayre
The article doesn't say it, but this cool solar forecasting software uses no large language models, the thing people usually think of when they hear "AI." It uses a fairly simple convolutional neural network that's readily trained on a laptop. No data centers or LLMs involved. Research paper here:
November 9, 2025 at 12:29 PM
AI will, of course, survive and even thrive when the bubble bursts, just like the world of dot com did.

But there will be economic fallout for years.
November 6, 2025 at 6:27 AM
Businesses hoping to use AI to automate job roles and cut costs by removing people are going to get badly burnt.

When all the costs are added up, AI automation will prove (as all automation does) to cost as much as the people who did the jobs before.
November 4, 2025 at 8:58 AM
People claim LLMs are like human brains & their output should be seen as a sign they think & reason

This position can only be reached if your ignorance of how LLMs work matches your ignorance of how the brain works

LLMs are single-shot processors with no mechanisms for inference or awareness
Understanding neural networks – Part 2: Hidden neurons, troublesome feedback - Jon Ayre
There are many different ways you can connect artificial neurons together to create a working neural network. In part 2 of this guide I’ll explain the more common approaches and discuss their strength...
jonayre.uk
November 3, 2025 at 12:08 PM
Your regular reminder that "valued at" and "worth" mean different things.
November 3, 2025 at 7:31 AM
Is this a US thing? Having worked with tech people for several decades in the UK, I haven't found them to be any less capable in non-STEM areas than others in the workforce.

And in many companies, the issue with the people making the bad tech decisions is that they know very little about tech.
Ironically that post reveals the other meaning of the Torment Nexus, which is that STEM only education left a generation of tech guys with no ability to interpret text
November 2, 2025 at 8:46 AM
We're deep into the AI adoption conversation now, and there are as many opinions as there are people.

So how about a look back on some thoughts from 2020 to see if I was barking up the right tree:
The Business Evolution Map, Part 4
Industry is struggling to cope with the sudden absence of people from essential processes, and automation is forefront in their minds.
www.equalexperts.com
November 1, 2025 at 6:30 PM
People often hate agile because what they've been told is agile and made to do is definitely not agile and absolutely something to hate

However, there are people who hate true agile, and I believe they hate it because test and learn quickly and repeatedly shows their ideas were wrong
November 1, 2025 at 1:31 PM
Them: You live in an age where AI knows everything about you, can predict your inner thoughts and anticipate your needs!

Also them:
November 1, 2025 at 12:51 PM
This has become a regular business practice and AI hype has now provided an accepted argument for why it can be done without harming the long term viability of the company.

Regardless of AI's utility as an automation technology, cuts of this nature are the start or continuation of a downward spiral.
UPS just said it cut 48,000 jobs this year.
Here’s why that matters:

💰 The company is making record profits;
📈 Its stock rose after the layoffs.

When a company calls that “savings,” it’s not efficiency — it’s putting profits way over people.
October 29, 2025 at 7:06 AM
Sometimes you're looking for advice, but other times you're just looking for people to say "Ah yes. That's crap isn't it?"

This place is not for the latter, as the replies demonstrate 😂
I have this Lenovo tablet that shuts off when it hits 20% battery life. If 20% is the point where it no longer functions, shouldn't the 20% point be 0%? I think it would be easier for a computer to remember to subtract 0.2 and multiply by 1.25 than me, a squishy thing made of hydrocarbons and blood
October 29, 2025 at 7:03 AM
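To be fair to the quoted post, the maths it asks for really is trivial: it's a linear rescale so the shutdown cutoff reads as 0%. A minimal Python sketch (the function name `remap_battery` and the generalised cutoff parameter are mine, not from the post):

```python
def remap_battery(reported: float, cutoff: float = 0.20) -> float:
    """Rescale a reported battery fraction so the shutdown cutoff shows as 0%.

    With the post's 20% cutoff this is (reported - 0.2) * 1.25,
    i.e. "subtract 0.2 and multiply by 1.25", clamped so readings
    below the cutoff never go negative.
    """
    return max(0.0, (reported - cutoff) / (1.0 - cutoff))

# A full battery still reads full, and the shutdown point reads empty:
print(remap_battery(1.0))  # 1.0
print(remap_battery(0.2))  # 0.0
print(remap_battery(0.6))  # 0.5
```

Dividing by `1 - cutoff` rather than hard-coding 1.25 keeps the same formula working for any manufacturer's chosen floor.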
Organisational stagnation - the process whereby a large successful enterprise slowly decays from the inside whilst still appearing to function on the outside until it suddenly collapses.

How long does it take?
Is it inevitable?
Can it be measured?
Is it reversible?

I feel a blog post is in order.
October 28, 2025 at 8:10 AM
This article is a great example of why so much talk about LLMs is ridiculed by those who understand the technology.

IMO, the people making these claims are doing so with zero understanding of how LLMs work, or they're being deliberately misleading.

Either way, it's embarrassing.
AI models may be developing their own ‘survival drive’, researchers say
Like 2001: A Space Odyssey’s HAL 9000, some AIs seem to resist being turned off and will even sabotage shutdown
www.theguardian.com
October 28, 2025 at 7:55 AM
Successful strategy relies on facts, not assumptions.

Experiments turn assumptions into facts... Always.

They might not be the facts you expected but they are the facts you need.

Test and learn, people.
Test and learn.
October 27, 2025 at 8:49 AM
Reposted by Jon Ayre
I stand with Jon!
October 25, 2025 at 6:50 PM
Politicians today just seem to want to divide society and set us against each other

What we need is a new kind of politics that unites us around something on which we can all agree.

That's why, on Monday I'll be launching the "Cats are obviously better than dogs" party.

Onwards together!
October 25, 2025 at 4:16 PM
This is exhausting.

The LLMs are trained on human writings, hence the sentences they generate echo the reasoning that people share in the same contexts

It's no surprise, therefore, that the resulting text appears to convey inference and understanding

No more surprising than an audio book doing so
Are Large Language Models Sensitive to the Motives Behind Communication? (Basically yes, although reasoning models underperform, and all of them struggle with online ads in a naturalistic setting.)
arxiv.org/abs/2510.19687
October 25, 2025 at 7:50 AM
Happy "why the hell do we still do this hour forward hour back thing" week to all who celebrate 🥳
October 24, 2025 at 9:29 AM
Which works pretty damn well most of the time. One outage does not an issue make, especially when you compare it to the self-run on prem data centres that came before.
The AWS outage today is a good reminder that there is no "cloud", there's just somebody else's computer.
October 22, 2025 at 7:09 AM
I'm not a fan of excessive centralisation, but if the AWS outage teaches us anything it's that the current hosting reliability situation is orders of magnitude better than it was before these services existed.
October 21, 2025 at 7:13 PM
Who's solving the Louvre robbery? Right answers only.

It's true that there can only be one...

...this one
October 21, 2025 at 6:52 AM
Don't tell me you hate LLMs because they burn too much energy then tell me you write all your code in Python.

That's 30 times the energy you need to burn for no real benefit - even if "it's just one program bro"
October 20, 2025 at 7:26 PM