Eli Fennell, Ph.D.
@elifennellphd.bsky.social
Eli Fennell is an early-career Experimental Psychologist, specializing in Dynamical, Computational, Cognitive, and Evolutionary perspectives.
Website: https://elifennell.com/
Pinned
OSF
osf.io
Action Identification Characteristics and Priming Effects in ChatGPT (Preprint)
Eli Fennell, 2023
An analysis of abstracts of biomedical research published before and after the release of ChatGPT reveals that AI is having an unprecedented impact on scientific writing: at least 13.5% of abstracts show signs of LLM processing, with some subcorpora reaching 40%.
#AIinScience #LargeLanguageModels
Delving into LLM-assisted writing in biomedical publications through excess vocabulary
Excess words track LLM usage in biomedical publications.
www.science.org
July 6, 2025 at 5:15 PM
New research published in Addictive Behaviors reveals that problematic porn use is correlated with greater psychological distress and tends to remain stable over time. Further research is still needed to clarify the causal relationships between variables.
#ClinicalPsychology #Addiction
Problematic porn use remains stable over time and is strongly linked to mental distress, study finds
A yearlong study of more than 4,000 U.S. adults found that problematic pornography use tends to persist over time and is strongly associated with higher levels of anxiety and depression, suggesting a…
www.psypost.org
June 16, 2025 at 2:45 PM
"[T]he data revealed that out-group hostility was more intense than in-group loyalty. Americans appear more motivated by whom they dislike than whom they support. Democrats displayed particularly strong negative feelings toward Republicans compared to the reverse."
#PoliticalPsychology
Partisan identity drives social polarization more than race or religion, study finds
A new study reveals that political party affiliation is the most powerful driver of social polarization in the United States, surpassing race, religion, income, and education.
www.psypost.org
June 15, 2025 at 12:00 AM
Newly published research has found that interdependent self-construal and vertical and horizontal individualism moderate the association between trait mindfulness and prosocial helping, consistent with prior research on state mindfulness.
#Psychology #Mindfulness
Trait Mindfulness and Prosocial Behavior: The Moderating Role of Self-Construals and Individualism - Mindfulness
Objectives Trait mindfulness is associated with many measures of individual well-being, but its relationship to prosocial behavior is less clear. Prior research found that a brief intervention…
link.springer.com
June 12, 2025 at 3:15 PM
"Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities."
#AI #LargeLanguageModels
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood.
ml-site.cdn-apple.com
June 10, 2025 at 1:30 PM
MIT no longer stands behind a highly publicized paper by a doctoral student, which claimed that an AI tool at an unnamed materials science lab boosted materials discovery and patent filings, but only for top-performing scientists.
#AI #ArtificialIntelligence
MIT Says It No Longer Stands Behind Student’s AI Research Paper
The university said it has no confidence in a widely circulated paper by an economics graduate student.
www.msn.com
June 2, 2025 at 2:00 PM
A new database created by French lawyer and data scientist Damien Charlotin reveals that there have been more than 120 court cases where attorneys have been caught using AI systems like ChatGPT due to the presence of fake ('hallucinated') case citations.
#AI #LawTech #LargeLanguageModels
120 court cases have been caught with AI hallucinations, according to new database
More than 20 legal professionals have been busted in the past month alone.
mashable.com
May 30, 2025 at 7:45 PM
Large language models are five times more prone than human summarizers to overgeneralize when summarizing scientific research, and newer models are, on average, more prone to overgeneralization than older ones.
#AI #AIinScience #LargeLanguageModels
Generalization bias in large language model summarization of scientific research
Artificial intelligence chatbots driven by large language models (LLMs) have the potential to increase public science literacy and support scientific research, as they can quickly summarize complex scientific information in accessible terms. However, when summarizing scientific texts, LLMs may omit details that limit the scope of research conclusions, leading to generalizations of results broader than warranted by the original study.
royalsocietypublishing.org
May 30, 2025 at 7:00 PM
Multimodal Large Language Models fare poorly when asked to read the time from images of clocks or read the date from images of calendars, according to new research.
#AI #LargeLanguageModels
AI models can't tell time or read a calendar, study reveals
Challenges in visual and spatial processing and a deficit in training data have revealed a surprising lack of timekeeping ability in AI systems
www.livescience.com
May 27, 2025 at 8:00 PM
The Handbook of Social Psychology is celebrating its 90-year history by making its new 6th edition open access and free to download for everyone, in what its editors hope will become a model for future academic publications.
#Psychology #SocialPsychology
Handbook of Social Psychology Celebrates 90 Years with Groundbreaking Open-Access Sixth Edition | SPSP
SPSP is excited to announce the release of a landmark publication that has served as the definitive reference in our field for decades!
spsp.org
May 19, 2025 at 6:55 PM
A new computational analysis of world languages provides the first strong evidence for "lexical elaboration", the theory that the variety of words a given language uses to describe a concept reflects cultural specialization.
Source Paper: www.pnas.org/doi/10.1073/...
#Psycholinguistics
Linguists Find Proof of Sweeping Language Pattern Once Deemed a ‘Hoax’
Inuit languages really do have many words for snow, linguists found—and other languages have conceptual specialties, too, potentially revealing what a culture values
www.scientificamerican.com
May 16, 2025 at 3:15 PM
New research in Science Advances proposes that eukaryotic life may perform quantum computation via superradiant proteins in cells acting as biological qubits, vastly more computationally powerful than neurons, and comparable to state-of-the-art quantum computers.
#QuantumBiology
Computational capacity of life in relation to the universe
The discovery of life processing with UV-excited qubits supports a conjecture relative to the computing capacity of the universe.
www.science.org
May 8, 2025 at 2:30 PM
New research, published in Development and Psychopathology, shows that alexithymia traits appear to mediate the relationship between autism traits and decreased emotion recognition.
#Autism #Alexithymia
Emotional recognition difficulties may stem more from alexithymia than autistic traits
People with higher autistic traits struggled to recognize emotions in human faces, but not in anime faces. However, this difficulty was fully explained by alexithymia.
www.psypost.org
May 4, 2025 at 5:15 PM
New research out of the Network Contagion Research Institute finds a concerning rise in the normalization of endorsements of political violence among members of the far left, and in the amplification of such endorsements via online platforms.
#SocialPsychology #PoliticalPsychology
4/7/25 - NCRI Assassination Culture Brief - Network Contagion Research Institute
networkcontagion.us
April 30, 2025 at 6:35 PM
Participation by environmental scientists in climate change protests doesn't harm their credibility per se, but neither does it bolster support for the protesters, according to new research.
#Science #ClimateChange
Out of the labs and into the streets: Effects of climate protests by environmental scientists | Royal Society Open Science
There have been increasing calls for scientists to ‘get out of the labs and into the streets’ and become more involved in climate change advocacy and protest, including civil disobedience. A growing n...
royalsocietypublishing.org
April 30, 2025 at 2:35 PM
A survey of more than 25,000 workers across more than 7,000 workplaces has found that AI chatbots have had virtually no impact, positive or negative, on jobs or wages, and have produced only modest gains in productivity.
Source Paper: papers.ssrn.com/sol3/papers....
#AI #LargeLanguageModels
Generative AI is not replacing jobs or hurting wages at all
'When we look at the outcomes, it really has not moved the needle'
www.theregister.com
April 30, 2025 at 1:15 PM
"There wasn't a single category in which the AI agents accomplished the majority of the tasks," says Graham Neubig, a computer science professor at CMU and one of the study's authors.
Source Paper (Preprint): arxiv.org/abs/2412.14161
#AI #LargeLanguageModels
Carnegie Mellon staffed a fake company with AI agents. It was a total disaster.
A new study tested how AI agents performed in the workplace. The results show that AI isn't ready to do your job.
tech.yahoo.com
April 29, 2025 at 1:55 PM
The phrase "vegetative electron microscopy" is starting to pop up in dozens of research papers. No such thing exists. The phrase entered AI language models' training data through digitization errors and mistranslations.
#AI #AIinScience #LargeLanguageModels
A weird phrase is plaguing scientific papers – and we traced it back to a glitch in AI training data
Once errors creep into the AI knowledge base, they can be very hard to get out.
theconversation.com
April 17, 2025 at 3:35 PM
The limitations of "vibe coding" are having real-world consequences. LLMs cannot be trusted, and their outputs should always be sanity-checked before being implemented, even for programming code, a domain where they really should excel.
#AI #VibeCoding #LargeLanguageModels
AI code suggestions sabotage software supply chain
Hallucinated package names fuel 'slopsquatting'
www.theregister.com
April 13, 2025 at 7:35 PM
Behavior and decision researchers rarely give precise definitions of "behavior" and "decision." Precise definitions are a crucial key to good science, ensuring that we are all researching, discussing, and debating the same things, and not talking past each other.
#Psychology #BehaviorScience
A call for precision in the study of behaviour and decision - Nature Human Behaviour
By definition, behavioural and decision scientists study behaviour and decision — but they rarely define these concepts, which results in divergent interpretations across studies. Researchers should g...
www.nature.com
April 7, 2025 at 8:25 PM
"[I]ndividuals with an unstable sense of self may turn to social media to craft a more coherent or idealized identity. However, because social media interactions lack real-world grounding and accountability, these self-perceptions can become increasingly detached from reality."
#Cyberpsychology
Social media's disturbing role in "delusion amplification" highlighted in new psychology research
A new systematic review examining over 2,500 studies has uncovered a troubling link between high social media use and psychiatric disorders that involve distortions of self-perception.
www.psypost.org
April 6, 2025 at 1:30 AM
"Across all four studies, participants consistently showed lower empathy for political outgroup members than for ingroup or neutral targets. ... Liberals exhibited significantly less empathy for conservatives than conservatives showed for liberals."
#SocialPsychology #PoliticalPsychology
Study finds liberals show less empathy to political opponents than conservatives do
Liberals may struggle more than conservatives to empathize with political opponents, according to new psychological research.
www.psypost.org
April 5, 2025 at 1:36 AM
New findings from Anthropic researchers bring us one step closer to understanding some of the reasons why LLMs hallucinate.
#AI #LargeLanguageModels
Why do LLMs make stuff up? New research peers under the hood.
Claude’s faulty “known entity” neurons sometime override its “don’t answer” circuitry.
arstechnica.com
March 29, 2025 at 12:15 PM
"So far, the new test, called ARC-AGI-2, has stumped most models. 'Reasoning' AI models like OpenAI’s o1-pro and DeepSeek’s R1 score between 1% and 1.3%... Powerful non-reasoning models, including GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Flash, score around 1%."
#AI #AGI
A new, challenging AGI test stumps most AI models | TechCrunch
The Arc Prize Foundation has a new test for AGI that leading AI models from Anthropic, Google, and DeepSeek score poorly on.
techcrunch.com
March 28, 2025 at 2:05 PM
Diversity of experience, even as little as one new experience per day, enriches mood and memory, and may even be protective against neurodegenerative conditions like Alzheimer's disease.
#CognitiveScience #CognitiveHealth
One new experience a day can boost memory and mood
Researchers at the University of Toronto have found that doing just one new thing each day can significantly improve mood, memory and overall well-being—a finding that could be particularly beneficial...
medicalxpress.com
March 22, 2025 at 2:11 PM