Mor Naaman
@informor.bsky.social
Cornell Tech professor (information science, AI-mediated Communication, trustworthiness of our information ecosystem). New York City. Taller in person. Opinions my own.
As an academic w/ social science overlap: don't be too alarmed yet. There is a wide array of validation studies trying to establish whether this kind of simulation holds water as a methodology for some types of studies. I have yet to see a publication in a top venue that uses AI simulation outright.
November 7, 2025 at 5:22 PM
[GIF via media.tenor.com — Alt: Setting up a good waiting post.]
November 6, 2025 at 6:24 PM
... and does not allow them to take appropriate measures to stop the abuse.

bsky.app/profile/info...
This from @craigsilverman.bsky.social's story nails it. The companies generally do not like spam ads, nor do they need the revenue from them. They simply can't fight the spam effectively without adding friction that would result in “too much good revenue flushed out”. So they make it OUR problem.

#Regulation
Rob Leathern and Rob Goldman, who both worked at Meta, are launching a new nonprofit that aims to bring transparency to an increasingly opaque, scam-filled social media ecosystem. www.wired.com/story/scam-a...
November 6, 2025 at 3:20 PM
I think history shows that the fines rarely reach the level of pain required for the companies to act.
November 6, 2025 at 3:17 PM
Yes -- they can afford to lose 10%. But the hit they will have to take is a lot greater than that.

bsky.app/profile/carl...
(Reuters) - Meta internally projected late last year that it would earn about 10% of its overall annual revenue – or $16 billion – from running advertising for scams and banned goods, internal company documents show.

$META @reuters.com
www.reuters.com/investigatio...
November 6, 2025 at 2:12 PM
There is no ranked choice this time.
November 4, 2025 at 11:47 PM
We are just starting -- feedback, ideas, and suggestions welcome!
November 2, 2025 at 7:59 PM
November 2, 2025 at 7:07 PM
And here is why not clearly stopping exactly that use is a criminal-level blow to our society

bsky.app/profile/hype...
Again, this precise use case is why these systems exist: to proliferate people’s racist (transphobic, misogynistic…) imaginary at scale. This should not be seen as a “misuse” but rather the product being used exactly as intended.
Racist Influencers Using OpenAI's Sora to Make it Look Like Poor People Are Selling Food Stamps for Cash
Folks looking for evidence of SNAP recipients as welfare queens have no shortage of AI generated schlock to use as justification.
futurism.com
November 2, 2025 at 1:17 PM
Looks like it was discovered by some, so we are soft-launching. Here's the link

stechlab-labels.bsky.social
A research project from Cornell Tech, investigating using automated signals to help users have more context about the accounts they are interacting with. Contact: stech.bluesky.labeler@gmail.com Prof...
November 2, 2025 at 11:38 AM
We are just getting started. Suggestions, feedback, and ideas welcome!
November 2, 2025 at 12:50 AM
That's exactly how I imagined the bridge! (Well, maybe because I live here and have seen such structures.)
November 1, 2025 at 7:18 PM
We're just getting started. Happy to talk to the team about the philosophy behind it, and the plan!
November 1, 2025 at 4:07 PM
That's awful. I'm truly sorry you had to receive a fake apology; that's incredibly disrespectful. You deserved better than that.

😇
November 1, 2025 at 12:44 AM
Interestingly, @simondedeo.bsky.social points to exactly this context of apology as a place where people can use "Mental Proof" to overcome the perception of AI use, by *credibly* communicating intentions -- based on proof of shared knowledge and values.

ojs.aaai.org/index.php/AA...
Undermining Mental Proof: How AI Can Make Cooperation Harder by Making Thinking Easier | Proceedings of the AAAI Conference on Artificial Intelligence
ojs.aaai.org
October 31, 2025 at 5:11 PM