Hersh Gupta
hershgupta.com
@hershgupta.com
Lead Applied Scientist, Responsible AI @BCGX | @bostonu.bsky.social alum | Data, AI, and strategy enthusiast | Open-source contributor

Opinions are my own

#bikeboston #coys

📍DC -> BOS
Reposted by Hersh Gupta
Companies seem to have realized that "I want to talk to a human" is used all too often, so they've put up more barriers before the bot actually connects you to customer support

This is where I lose my patience with the whole thing: after the third time the agent blocks me from reaching one, as a paying customer
November 4, 2025 at 4:16 PM
These kinds of incidents could be avoided if the people making public technology procurement decisions had any real knowledge of technology, instead of uncritically relying on what tech salespeople tell them
Police in Baltimore handcuffed a child and put him on his knees after AI mistook his Doritos bag for a weapon. Dystopian nightmare fuel.
October 26, 2025 at 6:24 PM
Reposted by Hersh Gupta
i love giving entities that are more gullible than my grandma access to both my full home infrastructure and unfiltered external communication methods
August 6, 2025 at 2:44 PM
Reposted by Hersh Gupta
From now on we are required by law to call them fair use machines
everyone got paid so we can stop calling them plagiarism machines
September 5, 2025 at 9:28 PM
Reposted by Hersh Gupta
Not sure there is a clearer tell that AI *users* and their *personal* and *work* data are the product, not the Artificial Intelligence-Ignorance LLMs themselves. www.bloomberg.com/news/article...
OpenAI Offers ChatGPT for $1 a Year to US Government Workers
OpenAI is providing access to its ChatGPT product to US federal agencies at a nominal cost of $1 a year as part of a push to get its AI chatbot more widely adopted.
www.bloomberg.com
August 6, 2025 at 8:09 PM
Reposted by Hersh Gupta
I've been trying to articulate why the fawning, complimentary responses from AI chatbots feel so insidious to me. I've finally figured out how to explain it.

Wrote a long piece on how current model training and design choices threaten our critical thinking skills: maggieappleton.com/ai-enlighten...
A Treatise on AI Chatbots Undermining the Enlightenment
On chatbot sycophancy, passivity, and the case for more intellectually challenging companions
maggieappleton.com
August 6, 2025 at 9:34 AM
Reposted by Hersh Gupta
Don't leave AI to the STEM folks.

They are often far worse at getting AI to do stuff than those with a liberal arts or social science bent. LLMs are built from the vast corpus of human expression, and knowing the history & obscure corners of human works lets you do far more with AI & grasp its limits.
July 20, 2025 at 6:06 PM
Reposted by Hersh Gupta
🚨 New paper from us: Given they are trained on human data, can you use psychological techniques that work on humans to persuade AI?

Yes! Applying Cialdini's principles for human influence more than doubles the chance of GPT-4o-mini agreeing to objectionable requests compared to controls.
July 18, 2025 at 5:08 PM
Reposted by Hersh Gupta
Elon saying he’s going to reprogram his chatbot’s view of history is a perfect example of why these kinds of AI products aren’t the neutral “tools” people always defend them as. They’re little political projects reflecting the ideologies of their creators
June 21, 2025 at 2:01 PM
Reposted by Hersh Gupta
Got distracted today & did a little experiment on ChatGPT 4o.

I told it I had a question about AI & asked it to suggest six experts to contact.

In one condition, I put "tough-minded and rigorous" before "experts".

In the other condition, I put "friendly and kind" before "experts".

⬇️
June 13, 2025 at 11:28 PM
Reposted by Hersh Gupta
Not good. Thank you to @wired.com for documenting and to the researchers for speaking up.
NIST has issued new instructions to scientists that partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of 'AI safety,' 'responsible AI,' and 'AI fairness' in order to reduce 'ideological bias, to enable human flourishing and economic competitiveness.'
Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models
A directive from the National Institute of Standards and Technology eliminates mention of “AI safety” and “AI fairness.”
www.wired.com
March 15, 2025 at 10:09 AM
I feel like my ability to figure out whether someone knows their stuff when they're talking about AI has become sharply honed, the more time I spend switching contexts between technical and non-technical people
March 11, 2025 at 11:23 PM
Anthropic’s MCP should be more widely used in enterprise
my biggest worry is that MCP is too technical to be embraced by the enterprise

it’s got huge potential, but it’s in its nerd infancy rn
MCP Demystified

for real, i tried to make this one extremely dumb and terse. it jumps straight into a bunch of analogies & FAQs

honestly curious if anyone thinks it's useful

timkellogg.me/blog/2025/03...
March 7, 2025 at 8:29 PM
An X engineer posted this output from Grok to demonstrate how "good" their LLM is (CW: racism)
February 24, 2025 at 5:40 PM
Reposted by Hersh Gupta
The left version of ‘run government like a business’ and ‘cut red tape’ is that there are a ton of asinine constraints that the public sector is under, things that would instantly be recognized as insane practices in a well run private firm, in the name of ‘small government’.
February 11, 2025 at 12:26 AM
Reposted by Hersh Gupta
This is sort of the skeleton key to understanding a lot of shit: there 100% is a non-trivial amount of inefficiency and waste in the federal bureaucracy, and most of it is a *direct result* of ‘reforms’ that are meant to assuage the people who complain about “waste and inefficiency in govt”
for decades the federal govt has bent over backwards to limit spending even on totally sensible things like office coffee, all so they can say your tax dollars aren't going to pay for coffee. and it was all for nothing bc elon is tweeting out conspiracy theories about non-existent fraud.
February 11, 2025 at 12:15 AM
The most effective responsible generative AI technique is to proactively gatekeep LLMs from decision-makers who don't know how transformers work
February 10, 2025 at 7:19 PM
A lot of people don't know how useful the Model Context Protocol is - easily Anthropic's most underrated feature. It lets Claude work with your computer and applications through natural language. Official and unofficial MCP servers, which act like extensions, are listed here:

www.mcpservers.ai
MCP Servers
Browse the largest library of Model Context Protocol Servers. Share Model Context Protocol Servers you create with others.
www.mcpservers.ai
February 8, 2025 at 9:14 PM
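Under the hood, the protocol the post above describes is just JSON-RPC: a client (like Claude's desktop app) lists a server's tools and invokes them by name. A minimal sketch of the message shapes using only the standard library; the `read_file` tool and its schema here are hypothetical, not from any real server:

```python
import json

# Hypothetical tool a server might advertise in response to "tools/list".
tool = {
    "name": "read_file",  # illustrative name, not a real server's tool
    "description": "Read a file from the local filesystem",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}

# The client then invokes it with a JSON-RPC 2.0 "tools/call" request.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": tool["name"], "arguments": {"path": "/tmp/notes.txt"}},
}

print(json.dumps(request, indent=2))
```

The "extension" feel comes from the fact that any process speaking these messages over stdio or HTTP can expose tools to the model, no Anthropic-specific code required.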
If you have to prompt engineer confidence scores, then you already know they're not going to be reliable.

arxiv.org/abs/2412.14737
On Verbalized Confidence Scores for LLMs
The rise of large language models (LLMs) and their tight integration into our daily life make it essential to dedicate efforts towards their trustworthiness. Uncertainty quantification for LLMs can es...
arxiv.org
February 8, 2025 at 2:25 PM
If you, like me, have a feeling of despair reading this news, I'd ask that you consider donating to the organizations who do this work:

Global Fund to Fight AIDS, Tuberculosis, and Malaria: act.unfoundation.org/FJvB3vUCJUep...

International Medical Corps: internationalmedicalcorps.org?form=Main
NEW: Organizations that provide vital, lifesaving care for desperate and vulnerable people around the world have been forced to halt operations, turn away patients and lay off staff.

“I’ve never seen anything that scares me as much as this,” one doctor said.
“People Will Die”: The Trump Administration Said It Lifted Its Ban on Lifesaving Humanitarian Aid. That’s Not True.
Organizations that provide vital care for desperate and vulnerable people around the world have been forced to halt operations, turn away patients and lay off staff. “I’ve never seen anything that sca...
www.propublica.org
February 1, 2025 at 8:06 PM
Reposted by Hersh Gupta
It's actually theft of taxpayers' money when websites go dark. We paid for that data
February 1, 2025 at 12:55 AM
Reposted by Hersh Gupta
okay so what legal authority does elon musk have to commandeer the office of personnel management and access reams of sensitive data? www.reuters.com/world/us/mus...
Exclusive: Musk aides lock government workers out of computer systems at US agency, sources say
Aides to Elon Musk charged with running the U.S. government human resources agency have locked career civil servants out of computer systems that contain the personal data of millions of federal employees, according to two agency officials.
www.reuters.com
January 31, 2025 at 8:17 PM
Reposted by Hersh Gupta
Today, we are publishing the first-ever International AI Safety Report, backed by 30 countries and the OECD, UN, and EU.

It summarises the state of the science on AI capabilities and risks, and how to mitigate those risks. 🧵

Full Report: assets.publishing.service.gov.uk/media/679a0c...

1/21
January 29, 2025 at 1:50 PM
positron is the best IDE for data science and it's been my daily driver for over a year
removing vscode from the dock and going all-in on positron for everything
January 29, 2025 at 6:21 PM
Another takeaway is that we've been Stockholm Syndrome'd to think pandas has a reasonable and intuitive API for data analysis
I wasn't aware of the Pivot Table concept, and spent some time writing a script to compute stats over groups in Pandas.

After some tinkering (stackoverflow.com/questions/35...) I was able to do it in ~6 hours....

Later on, I learned about pivot tables. The same operation took 80 seconds! 😅
January 29, 2025 at 5:23 PM