Jeffrey Bowers
@jeffreybowers.bsky.social
They seem to have a point: "Around a fifth (21%) do not feel free to discuss challenging or controversial topics in their teaching. This rises to a third (34%) of academics from ethnic minority backgrounds". My colleague David Miller at Bristol was fired despite a court dismissing antisemitism charges.
October 27, 2025 at 10:38 AM
Think that would be a good point to make in the paper. Many people will not know of the alternative approach.
October 26, 2025 at 4:33 PM
Really interesting work! Hope to go into the details in our lab group. Just one initial comment – Grossberg has addressed many of these effects in his models. Would be interesting to relate your work to his. e.g.:
link.springer.com/content/pdf/...
sites.bu.edu/steveg/files...
October 26, 2025 at 12:20 PM
There is an alternative press that is far better than back then. Check it out. And get more people to move on.
October 26, 2025 at 10:21 AM
Although we disagree on some points, @rtommccoy.bsky.social has been really helpful in responding to my comments.
September 30, 2025 at 8:37 PM
Curious what people think about what LMs are telling us about innate priors, and human language more generally.
September 30, 2025 at 8:37 PM
The failure of LMs with language-priors does not undermine the importance of innate domain-specific priors for humans. But it does suggest that LMs are not going to be a good model of human language anytime soon.
September 30, 2025 at 8:37 PM
Although this would appear to lend support to the importance of language priors, their findings do not support this conclusion either. Most basically, their prior-trained LMs still need orders of magnitude more data to learn.
September 30, 2025 at 8:37 PM
McCoy and Griffiths (2025) argue that distilling a language prior into LMs through meta-learning allows LMs to learn natural languages (English) quickly and flexibly: www.nature.com/articles/s41....
Modeling rapid language learning by distilling Bayesian priors into artificial neural networks - Nature Communications
Children can learn language from very little experience, and explaining this ability has been a major challenge in cognitive science. Here, the authors combine the flexible representations of neural networks with the structured learning biases of Bayesian models to help explain rapid language learning.
September 30, 2025 at 8:37 PM
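For readers unfamiliar with the approach, here is a minimal sketch of the general idea, not McCoy and Griffiths' actual method: sample many toy languages from a structured prior and train a small network across those episodes so its weights absorb the prior's regularities. Everything below (the reduplication "prior", sample_language, the GRU next-token model, the hyperparameters) is an illustrative assumption; the paper's own meta-learning setup is more sophisticated.

# Sketch only: "distilling" a structured prior into a neural network by
# training it on many toy languages sampled from that prior.
# Assumes PyTorch; sample_language() and meta_train() are hypothetical names,
# not from McCoy & Griffiths (2025).

import random
import torch
import torch.nn as nn

VOCAB = 10      # toy vocabulary: symbols 1..9 (0 unused)
SEQ_LEN = 8

def sample_language():
    """Draw a 'language' from a toy prior: a random symbol mapping plus a
    reduplication rule, standing in for a structured Bayesian prior."""
    mapping = {s: random.randrange(1, VOCAB) for s in range(1, VOCAB)}
    def sample_sentence():
        stem = [random.randrange(1, VOCAB) for _ in range(SEQ_LEN // 2)]
        return stem + [mapping[s] for s in stem]   # second half fixed by the rule
    return sample_sentence

class NextTokenModel(nn.Module):
    def __init__(self, vocab=VOCAB, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)
    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.out(h)

def meta_train(model, n_languages=500, sentences_per_language=32):
    """Each episode is a fresh language drawn from the prior, so the network's
    weights come to encode the prior's regularities rather than any one language."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(n_languages):
        sampler = sample_language()
        batch = torch.tensor([sampler() for _ in range(sentences_per_language)])
        inputs, targets = batch[:, :-1], batch[:, 1:]
        logits = model(inputs)                              # next-token prediction
        loss = loss_fn(logits.reshape(-1, VOCAB), targets.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    torch.manual_seed(0)
    meta_train(NextTokenModel())

After this kind of episodic training, the network can be fine-tuned on a new language sampled from the same prior and should learn it from relatively little data; the debate above is about whether that gain is large enough to matter for modeling human learners.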
In Bowers (in press, Psychological Review) I challenged the claim that LMs can learn a language when trained on a human-scale diet of words, consistent with the need for language-specific priors: osf.io/preprints/ps...
September 30, 2025 at 8:37 PM
Reposted by Jeffrey Bowers
WhatsApp wouldn’t let me write the word genocide in a text this week. It kept changing it to ‘Genie I’ (?) I slowed right down and realised the word isn’t in its dictionary. I found that weird.
September 19, 2025 at 6:40 AM
The papers you started with are not inconsistent with grandmother cells, for reasons detailed in many of the papers I've linked above. But please feel free to send papers you think are inconsistent. Again, my claim is simply that the current evidence does not rule them out. bsky.app/profile/tyre...
September 18, 2025 at 11:45 AM
Reposted by Jeffrey Bowers
You assert the data support distributed processing. I show through simulations that the data are consistent with grandmother cells (and that we should not rule out localist models based on current data). I guess we can leave it there.
September 17, 2025 at 7:16 PM