AI model IQ scores from 2023 to 2025 show a striking upward trajectory. The newest models (Gemini 2.5 Pro, OpenAI o1, Claude 3.7) score 120-130 on standardized IQ assessments, well above the human average of 100 and into "gifted" territory. Data from trackingai.org/home

This isn't just incremental progress. In less than two years, top AI models have advanced from "below average" scores (50-70 IQ) to "gifted" levels (130+). The slope of improvement shows no signs of flattening.
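To put those numbers on a familiar scale: assuming the conventional IQ scoring convention (mean 100, standard deviation 15), a score converts to a population percentile via the normal CDF. A minimal sketch, purely for context and not part of the trackingai.org methodology:

```python
# Sketch: map an IQ score to an approximate population percentile,
# assuming the conventional scale (mean 100, standard deviation 15).
from math import erf, sqrt

def iq_percentile(iq: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Fraction of the population expected to score at or below `iq` (normal CDF)."""
    z = (iq - mean) / sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

for score in (70, 100, 130):
    print(f"IQ {score}: ~{iq_percentile(score):.1%} of people score at or below this")
# IQ 130 lands around the 97th-98th percentile, the usual "gifted" threshold.
```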
Meanwhile, human intelligence appears under pressure. Recent international assessments show declining literacy, reasoning ability, and information-processing capacity across multiple countries and knowledge domains.

What's driving these human cognitive challenges? As John Burn-Murdoch (@jburnmurdoch) points out, our digital environment has fundamentally transformed how we engage with information:

We've moved from the social graph (selective content from people we know and actively engage with) to algorithmic feeds (endless streams of hyper-engaging content requiring minimal participation).

We've shifted from longer articles that demand synthesis and reflection to bite-sized, pre-packaged posts that require no critical evaluation or mental integration.

Our attention is increasingly fractured by notifications and interruptions, each one pulling us from deep focus and eroding our capacity for sustained cognitive effort.

Research consistently finds that passive digital consumption and frequent interruptions impair verbal processing, working memory, and self-regulation. The brain adapts to what we practice, and right now we're practicing distraction.

This isn't universal doom. Many individuals maintain robust cognitive abilities, and the research shows our underlying capacities remain intact. People can be retrained to apply their intelligence more effectively.

But the divergence is clear: as we design AI systems that match and exceed human cognitive capabilities, we are simultaneously creating digital environments that may undermine those same capabilities in ourselves.

The question isn't only whether machines will "take over" (a real risk in its own right, as argued at the-coming-wave.com); it's whether we're inadvertently diminishing our own cognitive strengths at precisely the moment we need them most to guide these increasingly powerful systems.

This demands practical responses: redesigning digital environments to support deep thinking rather than distraction, prioritizing active engagement over passive consumption, and ensuring our educational systems cultivate the reasoning abilities that remain distinctly human.

The encouraging news? The human brain remains remarkably adaptable. The same neuroplasticity that responds negatively to digital distraction can respond positively to the deliberate cultivation of deeper thinking.

The AI intelligence curve won't slow down. The responsibility falls on us to ensure that human cognitive capabilities don't deteriorate as our technologies advance. The greatest risk isn't only superintelligent AI; it's cognitively diminished humans.
The LLMs are excellent on many language, math, and coding tests, but they still struggle with the typical visual puzzle tests.
I fitted a curve to the data to estimate an IQ for the models, since their scores fell below the test's 85 cutoff.
2/4

🥇 ChatGPT o1 Pro: IQ = 81 (13 correct)
🥈 ChatGPT o1: IQ = 78 (12 correct)
🥉 Gemini 2.0 Thinking: IQ = 70 (10 correct)
4️⃣ Claude 3.5 Sonnet: IQ = 67 (9 correct)
5️⃣ Gemini 1206: IQ = 63 (8 correct)
3/4