Jason Tangen
@tangenjm.bsky.social
Professor of Cognitive Science at The University of Queensland.
This pretty much nailed it.
chatgpt.com/share/67a536...
ChatGPT - Strongest Psychological Theories
February 6, 2025 at 10:26 PM
30 years later and I still have nightmares about forgetting hundreds of loaves of bread in the oven from my bakery days. Guess some work stress never lets go. 😅
December 19, 2024 at 7:47 AM
Good point about the distinction. Though this year’s Nobel Prizes show an interesting pattern: Hinton’s theoretical insights + DeepMind’s compute power = AlphaFold solving a 50-year protein folding problem. The biggest breakthroughs seem to need both deep thinking and computational scale.
December 7, 2024 at 8:13 PM
Hmm… Deep Blue’s brute force beat Kasparov’s creativity. AlphaGo’s compute found moves no human imagined. When the gap is 12.4M GPUs vs a laptop, we’re not even playing the same sport anymore. I’d love to be wrong though!
December 6, 2024 at 8:51 AM
Quick thought experiment: Pick your field’s most influential paper from the 90s. Now imagine a new PhD student using AI to review the literature. Would they find it? More importantly—would they find the data that made it influential? Science isn’t just papers anymore.
December 2, 2024 at 4:34 AM
Just started writing about AI, cognition, and how universities are changing. This is my first post, with lots more coming: everyirrelevance.substack.com
Every irrelevance. | Jason Tangen | Substack
Making sense of minds, machines, and the beautiful mess where they meet.
November 28, 2024 at 3:27 AM
Don’t think so. When they restricted LLMs to just the results section (where simple outcome averaging would happen), performance dropped substantially. The models are integrating information from methods and background to make their predictions.
November 27, 2024 at 11:44 PM
Full paper here for those who want to dig deeper into the methods and implications: https://buff.ly/4991xfq (Also contains a great discussion of how the models actually achieve this - it’s not what you might expect!)
Large language models surpass human experts in predicting neuroscience results - Nature Human Behaviour
Large language models (LLMs) can synthesize vast amounts of information. Luo et al. show that LLMs—especially BrainGPT, an LLM the authors tuned on the neuroscience literature—outperform experts in…
November 27, 2024 at 10:33 PM
This isn’t about memory - the LLMs hadn’t seen these results before. It’s about discovering the deep structure of how neuroscience works. A glimpse of AI that doesn’t just retrieve knowledge but helps generate scientific predictions. The future is starting to look different.
November 27, 2024 at 10:33 PM
The fascinating part: LLMs succeed by integrating patterns across the scientific literature - something humans physically can’t do with millions of papers. When restricted to local context only, their performance plummets. They’re building genuine scientific intuition.
November 27, 2024 at 10:33 PM