Lara J. Martin
@laramartin.net
Applying and improving #NLP/#AI but in a safe way. Teaching computers to tell stories, play D&D, and help people talk (accessibility).
Assistant Prof in CSEE @ UMBC.
🏳️‍🌈♿
https://laramartin.net
Reposted by Lara J. Martin
Looking forward to seeing friends and colleagues in Edmonton this week for AIIDE! I'll be trying my hand at live tweeting the paper sessions at both the main conference and EXAG/INT, which I'll try to keep in this big reply thread (starting off tomorrow). @aiide.bsky.social @exag.bsky.social
November 10, 2025 at 12:39 AM
I'm pleased to announce that Shadab Choudhury & Asha Kumar's paper "Evaluating Human-LLM Representation Alignment: A Case Study on Affective Sentence Generation for Augmentative and Alternative Communication" has been accepted to IJCNLP-AACL Findings 2025!
arxiv.org/abs/2503.11881
#AACL2025 #IJCNLP
Evaluating Human-LLM Representation Alignment: A Case Study on Affective Sentence Generation for Augmentative and Alternative Communication
Gaps arise between a language model's use of concepts and people's expectations. This gap is critical when LLMs generate text to help people communicate via Augmentative and Alternative Communication ...
arxiv.org
November 11, 2025 at 9:27 PM
Reposted by Lara J. Martin
There is one week left to apply to join us at Rutgers! We're hiring an Assistant Professor in Computational Sociology as part of a cluster of new hires in data science and AI.

Applications are due next Wednesday, 10/15.
Assistant Professor in Computational Sociology
The Department of Sociology at Rutgers University, New Brunswick, seeks applications for a tenure-track position at the Assistant Professor level specializing in Computational Sociology.  The search i...
jobs.rutgers.edu
October 9, 2025 at 2:11 PM
Thrilled to announce that my lab has had two papers accepted to the Wordplay Workshop at EMNLP 2025!!
September 30, 2025 at 7:11 PM
Congratulations to the winners of the Commonsense Persona Grounded Dialogue Challenge! discourse.aicrowd.com/t/winners-ca...
We look forward to hearing about the details of your systems at the EMNLP 2025 Wordplay Workshop!
🏆 Winners & Call for Paper
Hello all, Thank you for being part of the Commonsense Persona Grounded Dialogue Challenge 2025! Whether you finished at the top or joined for the first time, we’re glad you participated in this seco...
discourse.aicrowd.com
August 29, 2025 at 6:36 PM
Reposted by Lara J. Martin
Hidden Door is an AI storytelling game that actually makes sense
You’ll help write what happens next.
buff.ly
August 13, 2025 at 2:10 PM
Submission deadline has been extended to September 12th! Everyone tell your friends! Submit your paper to this awesome workshop!
wordplay-workshop.github.io
August 13, 2025 at 2:27 PM
The CFP for the Wordplay Workshop at #EMNLP2025 is out! We welcome any work that sits at the intersection of #NLP and games/narrative.
We are looking for regular papers (4-8 pages) and extended abstracts about open challenges in the space (2 pages).

Papers due: August 29
wordplay-workshop.github.io/cfp/
Official website for the Wordplay Workshop at EMNLP 2025. Exploring interactive narratives, text-adventure games, and AI agents in language-based environments. Join us in Suzhou, China, November 5th-9...
wordplay-workshop.github.io
June 17, 2025 at 4:17 PM
Reposted by Lara J. Martin
US President Donald Trump’s proposed budget for fiscal year 2026 calls for unprecedented cuts to scientific agencies that, if enacted, would deal a devastating blow to US science, policy specialists say.

https://go.nature.com/4jYv3IP
Trump proposes unprecedented budget cuts to US science
Huge reductions, if enacted, could have ‘catastrophic’ effects on US competitiveness and the scientific pipeline, critics say.
go.nature.com
May 2, 2025 at 10:25 PM
Reposted by Lara J. Martin
every definition and description of "AI Slop" reads to me as what for-profit, highly metricized, and algorithmic platforms have always done. making it primarily an "AI" issue lets the platforms and this general logic off the hook too much imo
«Monetized ‹brainrot› reels is generative AI’s killer app; this type of content is how people are making money with AI ...» Platform capitalism has created the infrastructural conditions for a globalized grifter economy, now genAI is giving them the means to produce content as SPAM on scale
1/
'Brainrot' AI on Instagram Is Monetizing the Most Fucked Up Things You Can Imagine (and Lots You Can't)
The hottest use of AI right now? Dora the Explorer feet mukbang; Peppa the Pig Skibidi toilet explosion; Steph Curry and LeBron James Ahegao Drakedom threesome.
www.404media.co
May 2, 2025 at 3:38 PM
What data is he pulling this from? Because back when people were using Facebook, they would have 100+ "friends" on average. 😂
Mark Zuckerberg says Meta's chatbots will supplement your real friends: "The average American has fewer than 3 friends ... but has demand for ... 15 friends" (h/t x.com/romanhelmetg...)
May 1, 2025 at 5:07 PM
Reposted by Lara J. Martin
This is one of the worst violations of research ethics I've ever seen. Manipulating people in online communities using deception, without consent, is not "low risk" and, as evidenced by the discourse in this Reddit post, resulted in harm.

Great thread from Sarah, and I have additional thoughts. 🧵
The mods of r/ChangeMyView shared the sub was the subject of a study to test the persuasiveness of LLMs & that they didn't consent. There’s a lot that went wrong, so here’s a 🧵 unpacking it, along with some ideas for how to do research with online communities ethically. tinyurl.com/59tpt988
From the changemyview community on Reddit
Explore this post and more from the changemyview community
tinyurl.com
April 26, 2025 at 10:25 PM
Reposted by Lara J. Martin
A key difference here is that while either can be incorrect, the structure of Wikipedia *creates context* and the structure of LLMs *destroys context*

Wikipedia has linked sources and an edit history showing where information came from and who added it when

An LLM just generates text
Some of the anti-AI stuff feels a bit like when people would say "don't use Wikipedia as a source." It's just like anything else, a piece of information that you weigh against multiple sources and your own understanding of its likely failure modes
April 26, 2025 at 6:39 PM
Reposted by Lara J. Martin
ATTENTION: NSF GRANT RECIPIENTS

We received a heads-up from a trusted source that you should proactively download/print/screenshot any documentation on research.gov pertaining to your NSF awards, both those that are current and any that have closed in the last 5-6 years.

1/n
ALT: a cartoon of a man holding a frying pan and a spoon with red alert written above him
media.tenor.com
April 24, 2025 at 9:09 PM
Reposted by Lara J. Martin
Northwestern just announced that the university will meet the funding needs of any research that is being impacted by stop orders. LET'S GOOOOOOOOOOOOO! #edusky #academicsky

wgntv.com/evanston/nor...
Northwestern University to self-fund research despite Trump administration freeze
While university officials said they’ve still not received official word of such action, they acknowledged receiving about 100 stop-work orders from the federal government on roughly 100 fede…
wgntv.com
April 19, 2025 at 11:17 AM
Reposted by Lara J. Martin
My NSF CS for All grant was just terminated (www.nsf.gov/awardsearch/...).

Nationwide, we were:
• Improving the quantity + quality of K-12 CS teachers
• Ensuring the sustainability of K-12 CS teacher prep programs
• Ensuring all youth in the U.S. have access to great CS teachers
NSF Award Search: Award # 2318257 - Collaborative Research: An Equitable, Justice-Focused Ecosystem for Pacific Northwest Secondary CS Teaching
www.nsf.gov
April 18, 2025 at 11:20 PM
Reposted by Lara J. Martin
Can confirm that my NSF grant "How False Beliefs Form & How to Correct Them" was cancelled today because it is "not in alignment with current NSF priorities." Shocking that understanding how people are misled by false information is now a forbidden topic. Our work will continue, but at a smaller scale.
NSF has posted an “update on priorities.”

They’re canceling all “DEI and misinformation/disinformation” grants.

And the guidance on how to fulfill the longstanding, legally mandated Broadening Participation requirement is utterly incoherent.

www.nsf.gov/updates-on-p...
Updates on NSF Priorities
www.nsf.gov
April 18, 2025 at 10:40 PM
Reposted by Lara J. Martin
I love how you can have an overview of the internet in 1995 by... looking in a book that lists 8,000+ websites with photos lol
April 18, 2025 at 10:45 PM
I really love this idea that venture capitalists and tech companies are just doing reinforcement learning IRL:

"With access to infinite resources, these adherents ran reinforcement learning on their companies. They threw billions of dollars' worth of spaghetti at the wall until something stuck."
What if instead of The Terminator and The Matrix, we used Severance and South Park to think about AI?
Maybe just believing in AGI makes AGI exist.
Kill the wise one!
www.argmin.net
April 14, 2025 at 3:14 PM
Reposted by Lara J. Martin
1. LLM-generated code tries to run code from online software packages. Which is normal but
2. The packages don’t exist. Which would normally cause an error but
3. Nefarious people have made malware under the package names that LLMs make up most often. So
4. Now the LLM code points to malware.
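One defense against this failure mode is to never install an LLM-suggested dependency blindly, but to check it against a vetted allowlist first. A minimal sketch (the allowlist contents and the suspect package name below are illustrative, not from the linked article):

```python
# Hypothetical guard against "slopsquatting": LLM-suggested dependency names
# are split into vetted names (safe to install) and unvetted names (to be
# verified by hand before installing, since attackers register commonly
# hallucinated names on package registries).

VETTED_PACKAGES = {"requests", "numpy", "pandas"}  # your org's approved list

def filter_dependencies(suggested):
    """Split LLM-suggested dependency names into (approved, suspect)."""
    approved = [p for p in suggested if p.lower() in VETTED_PACKAGES]
    suspect = [p for p in suggested if p.lower() not in VETTED_PACKAGES]
    return approved, suspect

approved, suspect = filter_dependencies(["requests", "huggingface-hub-utils"])
print(approved)  # ['requests']
print(suspect)   # ['huggingface-hub-utils'] -- check the registry by hand
```

An allowlist is deliberately conservative: unlike checking whether the name merely exists on PyPI, it also catches the case in step 3 above, where the malicious package *does* exist under the hallucinated name.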
LLMs hallucinating nonexistent software packages with plausible names leads to a new malware vulnerability: "slopsquatting."
LLMs can't stop making up software dependencies and sabotaging everything
: Hallucinated package names fuel 'slopsquatting'
www.theregister.com
April 12, 2025 at 11:43 PM
Reposted by Lara J. Martin
The bipartisan belief that America would be a better place if *other people* worked in a factory.

www.ft.com/content/8459...
April 13, 2025 at 5:44 PM
Reposted by Lara J. Martin
I am *horrified* by Sam Altman's suggestion that ethical AI limits can be gathered through AI itself being used to scale out having conversations with hundreds of millions of human users of OpenAI products and it being a matter of majority consensus mediated through his machine. Ethics is a field!
April 11, 2025 at 8:42 PM
Reposted by Lara J. Martin
'Bluesky has overtaken its flailing rival X in hosting posts related to new academic research, indicating the platform is fast becoming the go-to place for scholars to share their work.'
X’s dominance ‘over’ as Bluesky becomes new hub for research
Data indicates more scholars turning to alternative social media site to post about their work after Elon Musk’s Twitter takeover
www.timeshighereducation.com
April 9, 2025 at 7:14 AM
Reposted by Lara J. Martin
Reminder that for two more days @assistiveware.bsky.social is offering a 50% discount on your favorite AssistiveWare apps, like the symbol-based text-to-speech app Proloquo2Go. Sale ends April 9.

www.assistiveware.com/aac-app-sale #AutismAcceptanceMonth
Autism Acceptance Month Sale 2025 - AssistiveWare
Everyone has something to say. Working closely with the AAC community, we build apps and share best practice to help you communicate with the world.
www.assistiveware.com
April 7, 2025 at 6:21 PM