andrea e. martin
@andreaeyleen.bsky.social
::language, cognitive science, neural dynamics::
Lise Meitner Group Leader, Max Planck Institute for Psycholinguistics |
Principal Investigator, Donders Centre for Cognitive Neuroimaging, Radboud University |
http://www.andreaemartin.com/
lacns.GitHub.io
Pinned
Ode to the original language model, or:
Give me literally Anything* instead of Large Language Models (LLMs)
*(no predictive coding either!)

By Lady Byronadrea LLMartin 1/n
Reposted by andrea e. martin
Manganese dendrites displayed in Gamagori Natural History Museum.
January 18, 2026 at 3:17 AM
Reposted by andrea e. martin
This is also why agentic LLMs will never live up to the hype. Error rates (ie hallucinations, bc those are errors) multiply.

Ex: if you chain two LLMs that individually have 80% success rates, the total success rate is 64%!

Chained 4 times? 41%

😳🫠🤷🏽‍♂️
February 15, 2026 at 8:33 PM
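(A minimal sketch of the compounding arithmetic in the post above: if each step in a chain of LLM calls succeeds independently with probability p, the whole chain succeeds with probability p**n. The 80% figure and the independence assumption are illustrative, taken from the post, not measurements of any particular model.)

```python
def chain_success_rate(p: float, n: int) -> float:
    """Probability that n independently chained steps all succeed."""
    return p ** n

if __name__ == "__main__":
    # Assumed per-step success rate of 80%, as in the post above.
    for n in (1, 2, 4, 8):
        print(f"{n} step(s) at 80% each: {chain_success_rate(0.8, n):.0%}")
    # 2 steps -> 64%, 4 steps -> 41%, matching the figures quoted above.
```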
Reposted by andrea e. martin
OpenAI “acknowledged in its own research that LLMs will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.”

You can’t trust chatbots.
OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limi...
www.computerworld.com
February 15, 2026 at 8:25 PM
Reposted by andrea e. martin
OK! I collected much of what I, @spookyachu.bsky.social, @andreaeyleen.bsky.social (and other collaborators not on here) have said on the Turing test (from critical, gendered, etc. angles), as it keeps being relevant: olivia.science/turing — hope it's useful for others too. Happy Sunday! 🤖💭
February 15, 2026 at 1:19 PM
Reposted by andrea e. martin
How transparent is your research funder? 🧐 In our latest work we introduce the Transparent Reporting Scale (TRS) to evaluate how funders report grant data. It's time for standardized transparency to bridge the "scissor-shaped curve" in neuroscience. www.frontiersin.org/journals/com...
Frontiers | Girls just wanna have funds: a new Transparent Reporting Scale for evaluating grant data reporting from funding agencies
Introduction: Despite the increasing representation of women in scientific fields, disparities in research funding allocation remain. This inequity deprives ta...
www.frontiersin.org
February 13, 2026 at 1:38 PM
Reposted by andrea e. martin
AI is not inevitable. If we had sane people in government who were not in thrall to billionaire tech CEOs, LLMs could be regulated, forced to obey existing copyright laws, and banned from places where their use is inappropriate, such as college classes. This should be a moderate position.
February 12, 2026 at 4:14 PM
Reposted by andrea e. martin
my annual congenial critical repost of this reminding linguists that generic singular they and specific/nonbinary singular they are drastically different ages and we should think about how to talk about this in our lingcomm. i think i had a poem about it a couple years back too but cant find it rn
A timely valentine from Dr Grammar: pronouns have histories.
February 13, 2026 at 4:29 PM
Reposted by andrea e. martin
girls just wanna make puns
What a fantastic initiative and important paper! Congratulations @vborghesani.bsky.social and co-authors

But I am absolutely devastated that @olivia.science and I didn’t think of the brilliant “girls just wanna have funds” first 😂🥹🥺
Proud of being part of www.winrepo.org and of all the work we do! 💙
February 13, 2026 at 3:07 PM
Reposted by andrea e. martin
I'm really glad more and more people are noticing correlationism, part of the conspiratorial logic of AI...

Guest, O. & Martin, A. E. (2025). A Metatheory of Classical and Modern Connectionism. Psychological Review. doi.org/10.1037/rev0...

PDF: repository.ubn.ru.nl/bitstream/ha...
February 13, 2026 at 8:24 AM
Reposted by andrea e. martin
Phonetics nerds will love this. Henry Higgins has competition
OSINT folks, the bar has been raised
February 12, 2026 at 6:15 AM
Reposted by andrea e. martin
More people should listen to @tressiemcphd.bsky.social

When people say AI is inevitable, let them know the future isn’t settled.
February 11, 2026 at 2:35 PM
Reposted by andrea e. martin
Our paper is out in @natneuro.nature.com!

www.nature.com/articles/s41...

We develop a geometric theory of how neural populations support generalization across many tasks.

@zuckermanbrain.bsky.social
@flatironinstitute.org
@kempnerinstitute.bsky.social

1/14
February 10, 2026 at 3:56 PM
Reposted by andrea e. martin
I have no idea if this account truly is a bot as in the bio or just a cosplayer or wtv, but the history of science very directly and simply shows the exact opposite. The obsession with prediction is recent and is definitively a red herring in the search for understanding. You've been warned. 😌
February 8, 2026 at 9:42 PM
Reposted by andrea e. martin
"prediction is a red herring" is so hard for so many to grasp, especially those trapped in correlationist thinking bsky.app/profile/oliv...
"Just because a model correlates with neural and behavioral data, it is not sufficient for us to infer that the model is performing cognition: correlation does not imply cognition."

On Logical Inference over Brains, Behaviour, and Artificial Neural Networks. doi.org/10.1007/s421...

3/n
On Logical Inference over Brains, Behaviour, and Artificial Neural Networks - Computational Brain & Behavior
In the cognitive, computational, and neuro-sciences, practitioners often reason about what computational models represent or learn, as well as what algorithm is instantiated. The putative goal of such...
doi.org
February 9, 2026 at 8:51 AM
Reposted by andrea e. martin
AI "impedes [theory because we're] interested in human-understandable theory and theory-based models, not statistical models which provide only a representation of the data. Scientific theories and models are only useful if [we understand them and] they connect transparently to research questions."
Not directly relevant, but medicalisation is also a strategy; perhaps useful, see bsky.app/profile/oliv...
February 8, 2026 at 7:33 AM
Reposted by andrea e. martin
There is no way to fix the core problem, which is that the statistical production of symbols is, by definition, not based on their meaning.
Would be interesting to compare the results on more recent models - but this problem won’t go away. LLMs are always going to be extrapolating from what has already, and often, been thought, which is why they aren’t windows to the future but anchors to the past.
Neat demonstration of how artificial so-called intelligence is taking us backwards.

"ChatGPT produced content most consistent with the 1960s and DALL-E 3 in the late 1980s and early '90s."

#AI - see @shannonvallor.bsky.social's work for important thinking on this
phys.org/news/2026-02...
February 7, 2026 at 2:38 PM
Reposted by andrea e. martin
Our review outlines and evaluates key predictions of active inference across four domains that span the action-perception cycle: action planning, decision-making, motor control and sensorimotor adaptation. For each domain, we contrast active inference with leading alternative accounts. #neuroskyence
January 29, 2026 at 8:29 AM
Reposted by andrea e. martin
According to active inference, brains minimize prediction errors. This bold, new unifying way of thinking about the brain has established it as one of the most discussed frameworks of this century, but it is also criticized for its limited empirical grounding. Our review addresses these concerns.
January 29, 2026 at 8:29 AM
Reposted by andrea e. martin
What is the brain for? Active inference is widely discussed as a unifying framework for understanding brain function, yet its empirical status remains debated. Our review identifies core predictions across the action-perception cycle and evaluates their empirical support: osf.io/preprints/ps...
January 29, 2026 at 8:29 AM
Reposted by andrea e. martin
it is impossible to convey how spiritually and emotionally devastating it is to know that other people do not consider you to be a real person with an interior (or exterior) life beyond them
February 5, 2026 at 5:24 PM
Reposted by andrea e. martin
How many opportunities did I lose because men I encountered or worked with did (or didn't) think of me as someone they could get in bed? What was the point of a reporter willing to cover an active warzone if they were still laid off after bad decisions made by the paper's senior managers and owner?
February 5, 2026 at 5:32 PM
Reposted by andrea e. martin
At the same time, the extent of the Washington Post layoffs highlights how it didn't matter how hardworking, loyal, smart, collaborative, award-winning, competitive, knowledgeable and devoted so many of the staff were. It wasn't enough to prevent being laid off by one of the richest men in the world.
February 5, 2026 at 5:19 PM
Reposted by andrea e. martin
the Epstein files are really devastating because they remind me of how many girls and women miss out on professional opportunities, mentorship and careers because of how many powerful, rich and influential men only view girls and women — and interactions with them — through the lens of sex
February 5, 2026 at 5:05 PM
Reposted by andrea e. martin
"Nested spatiotemporal theta–gamma waves organize hierarchical processing across the mouse visual cortex" www.nature.com/articles/s41...

traveling waves-related, supershiny looking figures, apparently made in Julia: github.com/brendanjohnh... makes me want to learn it 🙂

#VisualizationInspo
February 4, 2026 at 10:31 AM