John Sohrawardi
@nviable.bsky.social
DeFake Project Lead | HCI, AI, Cybersecurity | Fighting #deepfakes | RIT
You have to be kidding me ... 🫠

The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deep fakes.
www.wired.com/story/ai-saf...
Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models
A directive from the National Institute of Standards and Technology eliminates mention of “AI safety” and “AI fairness.”
www.wired.com
March 16, 2025 at 6:31 PM
Even intuitively this makes sense, especially given that we can't version closed models.

However, it's a useful paper to cite, with empirical findings for people using LMs for text analysis.

Given this, there are still research avenues for closed-source LMs where limited reproducibility is tolerable.
Pleased to share the latest version of my paper with Arthur Spirling and @lexipalmer.bsky.social on replication using LMs

We show:

1. current applications of LMs in political science research *don't* meet basic standards of reproducibility...
December 18, 2024 at 12:54 PM