X @mattblake_uk
#AICoaching #FutureOfWork #HumanConnection
AI may not be ready to replace deep therapeutic relationships yet.
But the question isn't whether it will happen. It's how quickly we adapt to working alongside these tools rather than competing with them.
AI handles the structured, goal-focused support that follows established frameworks.
Humans manage the complex, adaptive work that requires cultural sensitivity and emotional nuance.
But here's where it gets interesting.
AI only worked for narrow targets. It couldn't move broader measures like resilience or overall wellbeing, where human coaches had a significant effect.
Maybe it's teaching AI to catch itself in the act, just like we do.
#ArtificialIntelligence #CognitiveScience #MachineLearning
But here's the uncomfortable question: if AI is developing the same cognitive patterns that make us human, how long will our advantage of self-awareness last?
Both humans and AI construct rather than retrieve information.
The difference isn't that we're more intelligent.
He was confabulating plausible memories based on what seemed right to him.
That's exactly what AI does when it "hallucinates."
We don't store memories like computer files either.
He was genuinely trying to tell the truth, but got huge amounts wrong: meetings that never happened, the wrong people saying the wrong things.
Sound familiar?