Benjamin Laufer
@laufer.bsky.social
PhD student at Cornell Tech.

bendlaufer.github.io
Reposted by Benjamin Laufer
Here is a poster I presented today at the Human-AI Complementarity workshop hosted by the NSF #AI Institute for Societal Decision Making

You can read the latest version of the paper here: www.rafaelmbatista.com/jmp/

This is joint work with the wonderful @jamesross0.bsky.social
September 25, 2025 at 7:36 PM
Reposted by Benjamin Laufer
Saw Ben presenting this today. It’s really neat work.

Ben is finishing up his PhD at Cornell (advised by Jon Kleinberg) and is currently on the job market
In a new paper with @didaoh and Jon Kleinberg, we mapped the family trees of 1.86 million AI models on Hugging Face — the largest open-model ecosystem in the world.

AI evolution looks kind of like biology, but with some strange twists. 🧬🤖
September 25, 2025 at 7:04 PM
Reposted by Benjamin Laufer
This is quite clever and useful (read the full thread + the paper). I think/hope it opens up the path to a parallel study of their evolution on the epistemic/semantic space (i.e. what things they get better/worse at over time, what the utility gradients... 1/

via @tedunderwood.me
August 14, 2025 at 5:33 PM
I am finding that AI chatbots and language models are rapidly changing my own personal research practices – and my own ethical judgments about the appropriateness of the use of AI.
June 3, 2025 at 12:52 PM
Excited to speak at Princeton @princetoncitp.bsky.social next week!
April 30 - CITP's Bias in AI Reading Group will host guest Ben Laufer from @cornelltech.bsky.social. Ben will present work on regulation for fairness & safety along the AI development pipeline
April 25, 2025 at 3:12 AM
Reposted by Benjamin Laufer
I'm hiring for a machine learning data scientist & research assistant for summer 2025!

Join me on a project on invasive species management with an innovative startup doing on-the-ground removal of environmentally destructive invasive animals.

Paid, full-time w/ possibility to extend.
April 25, 2025 at 1:50 AM
Reposted by Benjamin Laufer
4) The “most common dog” in NYC is a Yorkshire Terrier named Bella. Jack Russell Terriers are often “Jack” and Charles Spaniels “Charlie.” Huskies are always named Luna, the reason for which is unclear (?).
April 2, 2025 at 2:16 PM
This was a lot of fun
Our lab had a #dogathon 🐕 yesterday where we analyzed NYC Open Data on dog licenses. We learned a lot of dog facts, which I’ll share in this thread 🧵

1) Geospatial trends: Cavalier King Charles Spaniels are common in Manhattan; the opposite is true for Yorkshire Terriers.
April 2, 2025 at 2:18 PM
I am in Boston, excited to give a talk at Northeastern tomorrow at 11am!

“Regulation along the AI Development Pipeline for Fairness, Safety and Related Goals”
March 31, 2025 at 9:53 PM
Reposted by Benjamin Laufer
(1/n) New paper/code! Sparse Autoencoders for Hypothesis Generation

HypotheSAEs generates interpretable features of text data that predict a target variable: What features predict clicks from headlines / party from congressional speech / rating from Yelp review?

arxiv.org/abs/2502.04382
March 18, 2025 at 3:29 PM
Reposted by Benjamin Laufer
Please repost to get the word out! @nkgarg.bsky.social and I are excited to present a personalized feed for academics! It shows posts about papers from accounts you’re following bsky.app/profile/pape...
March 10, 2025 at 3:12 PM
Reposted by Benjamin Laufer
We have a new review on generative AI in medicine, to appear in the Annual Review of Biomedical Data Science! We cover over 250 papers in the recent literature to provide an updated overview of use cases and challenges for generative AI in medicine.
December 18, 2024 at 4:14 PM
🪩New paper🪩 (WIP) appearing at @neuripsconf.bsky.social Regulatable ML and Algorithmic Fairness AFME workshop (oral spotlight).

In collaboration with @s010n.bsky.social and Manish Raghavan, we explore strategies and fundamental limits in searching for less discriminatory algorithms.
December 13, 2024 at 1:34 PM
Reposted by Benjamin Laufer
* Emerging scholars — a 2-year staff position in tech policy for candidates who have Bachelor’s degrees. It's an unusual program that combines classes, 1-on-1 mentoring, and work experience with real-world impact. Apply by Jan 10.
citp.princeton.edu/programs/cit...
December 2, 2024 at 9:49 PM
Reposted by Benjamin Laufer
📢NEW: 'Open' AI systems aren't open. The vague term, combined w frothy AI hype is (mis)shaping policy & practice, assuming 'open source' AI democratizes access & addresses power concentration. It doesn't.

@smw.bsky.social, @davidthewid.bsky.social & I correct the record👇
nature.com/articles/s41...
Why ‘open’ AI systems are actually closed, and why this matters - Nature
December 2, 2024 at 2:23 PM
Reposted by Benjamin Laufer
howdy!

the Georgetown Law Journal has published "Less Discriminatory Algorithms." it's been very fun to work on this w/ Emily Black, Pauline Kim, Solon Barocas, and Ming Hsu.

i hope you give it a read — the article is just the beginning of this line of work.

www.law.georgetown.edu/georgetown-l...
November 18, 2024 at 4:40 PM
Reposted by Benjamin Laufer
genAI has made us more suspicious that emails, cover letters, artworks, etc. are produced by AI. this shift forces us to change our behavior in order to prove our human-ness: a "burden of authenticity".
waking my account up to share a recent blog post on the subject: rajivmovva.com/2024/11/08/g...
In a world full of AI, authenticity will be the most valuable thing in the universe.
November 30, 2024 at 6:09 PM
I passed my “A Exam” yesterday meaning I am officially a “PhD Candidate” rather than a “PhD Student.” (Huge title change, I know.)

Thanks to everybody who has supported me along the way!
November 26, 2024 at 5:38 PM
Reposted by Benjamin Laufer
Hey! @friedler.net made a FAccT starter pack: bsky.app/starter-pack...
November 19, 2024 at 3:52 AM
Hi to my new connections. Is Bluesky taking off? I’m excited!!
November 17, 2024 at 10:13 PM
In a new essay for @knightcolumbia.org with Helen Nissenbaum, we offer an account of what's wrong with social media, and what's at stake.

We also discuss generative AI and, broadly, the problems posed by untrustworthy algorithmic systems.
NEW "Optimizing for What?" ESSAY: Algorithmic Displacement of Social Trust by Cornell Tech's Ben Laufer & Helen Nissenbaum. They outline existential threats posed by what they call "problematic" algorithmic amplification, and the processes these inform. knightcolumbia.org/content/algo...
December 5, 2023 at 9:29 PM
Reposted by Benjamin Laufer
Great paper!
"…algorithmic amplification is problematic because...it chokes out trustworthy processes that we have relied on for guiding valued societal practices and for selecting, elevating, and amplifying content” via @laufer.bsky.social, Helen Nissenbaum
knightcolumbia.org/content/algo...
December 5, 2023 at 9:27 PM
I am in Boston giving a talk tomorrow at Harvard’s EconCS seminar (1:30pm).

The talk is on genAI/ML technologies billed as "general-purpose". I'll discuss: which purposes, why and how? It's ongoing work with Hoda Heidari and Jon Kleinberg.

HMU if you're around to meet, etc!
November 2, 2023 at 4:14 PM
Reposted by Benjamin Laufer
Recently migrated colleagues: I’m on the job market! I’ll have a PhD from Cornell IS in 2024. My lab will explore how tenets of social responsibility can be realized in computer science + engineering. I’m open to TT faculty positions and research-oriented industry roles. More about me: emtseng.me
October 8, 2023 at 4:57 PM