Mor Naaman
@informor.bsky.social

Cornell Tech professor (information science, AI-mediated Communication, trustworthiness of our information ecosystem). New York City. Taller in person. Opinions my own.

Mor Naaman is a professor of information science at Cornell Tech. He is the founder of the Connective Media Hub and director of the Connective Media degree program. Naaman is known for foundational work on tagging behavior on social networking sites, the use of sites such as Twitter as social awareness streams, and real-world identification from social network activity. His research in these areas has been cited over 12,000 times on Google Scholar.


As an academic w/ social science overlap: don't be too alarmed yet. There is a wide array of validation studies trying to establish whether this kind of simulation holds water as a methodology for some types of studies. I have yet to see a publication in a top venue that uses AI simulation outright.

... and does not allow them to take appropriate measures to stop the abuse.

bsky.app/profile/info...

I think history shows that the fines rarely reach the level of pain required for the companies to act

Yes -- they can afford to lose 10%. But the hit they will have to take is a lot greater than that.

bsky.app/profile/carl...
(Reuters) - Meta internally projected late last year that it would earn about 10% of its overall annual revenue – or $16 billion – from running advertising for scams and banned goods, internal company documents show.

$META @reuters.com
www.reuters.com/investigatio...

This from @craigsilverman.bsky.social's story nails it. The companies generally do not like spam ads or need the revenue from them. They simply can't fight the spam effectively without adding friction that will result in “too much good revenue flushed out”. So they make it OUR problem.

#Regulation
Rob Leathern and Rob Goldman, who both worked at Meta, are launching a new nonprofit that aims to bring transparency to an increasingly opaque, scam-filled social media ecosystem. www.wired.com/story/scam-a...
Scam Ads Are Flooding Social Media. These Former Meta Staffers Have a Plan
Rob Leathern and Rob Goldman, who both worked at Meta, are launching a new nonprofit that aims to bring transparency to an increasingly opaque, scam-filled social media ecosystem.
www.wired.com

Wow this speech is incredibly bold and unapologetically ambitious. #Mamdani

There is no ranked choice this time.

Made a math worksheet for the kid. How evil am I? Answer on a scale of 1--6.7

We are just starting -- feedback, ideas, and suggestions welcome!

Our daily progress towards a @nealstephenson.bsky.social society
The nation’s largest police fleet of Tesla Cybertrucks is set to begin patrolling the streets of Las Vegas in November thanks to a donation from a U.S. tech billionaire, raising concerns about the blurring of lines between public and private interests.
Nation's largest fleet of police Cybertrucks to patrol Las Vegas
The nation’s largest police fleet of Tesla Cybertrucks is set to begin patrolling the streets of Las Vegas in November.
bit.ly
In which Palantir recruits high school students for fellowships by telling them to skip college because it holds little value and then puts them through a cherry-picked curriculum that oddly resembles… college
The older I get the more I value conscientiousness over raw intelligence or anything like that — when someone has completed college that’s a stronger signal of being able to handle tasks in an independent environment on a consistent basis: www.wsj.com/business/pal...
Palantir Thinks College Might Be a Waste. So It’s Hiring High-School Grads.
Tech company offers 22 teens a chance to skip college for its fellowship, which includes a four-week seminar on Western civilization
www.wsj.com

And here is why not clearly doing exactly that task is a criminal-level blow to our society

bsky.app/profile/hype...
Again, this precise use case is why these systems exist: to proliferate people’s racist (transphobic, misogynistic…) imaginary at scale. This should not be seen as a “misuse” but rather the product being used exactly as intended.
Racist Influencers Using OpenAI's Sora to Make it Look Like Poor People Are Selling Food Stamps for Cash
Folks looking for evidence of SNAP recipients as welfare queens have no shortage of AI generated schlock to use as justification.
futurism.com

Ok, if the stupidest possible tech reporting is happening, at least let it be in The Athletic

www.nytimes.com/athletic/676...
Seattle Reign’s Laura Harvey says ChatGPT inspired NWSL tactics: ‘It said play a back five, so I did’
The 45-year-old former Arsenal coach said she casually quizzed the AI chatbot on ideas it had for individual teams.
www.nytimes.com

Looks like it was discovered by some, so we are soft-launching. Here's the link

stechlab-labels.bsky.social
stechlab-labels.bsky.social
A research project from Cornell Tech, investigating using automated signals to help users have more context about the accounts they are interacting with. Contact: stech.bluesky.labeler@gmail.com Prof...
stechlab-labels.bsky.social


We are just getting started. Suggestions, feedback, and ideas welcome!

That's exactly how I imagined the bridge! (Well, maybe because I live here and have seen such structures.)

We're just getting started. Happy to talk to the team about the philosophy behind it, and the plan!

One could jokingly say some of the same people would also claim they were at Woodstock. Ironically, based on this survey, some of them very possibly were!
Maybe it was way more than seven million? "8% of Americans say they participated in a No Kings protest on October 18."

p.s. Older people are really showing up.

That's awful. I'm truly sorry you had to receive a fake apology; that's incredibly disrespectful. You deserved better than that.

😇

Reposted by Joel Z. Leibo

Interestingly, @simondedeo.bsky.social uses exactly this context of apology as a place where people can use "Mental Proof" to overcome the perception of AI use, by *credibly* communicating intentions -- based on proof of shared knowledge and values.

ojs.aaai.org/index.php/AA...
Undermining Mental Proof: How AI Can Make Cooperation Harder by Making Thinking Easier | Proceedings of the AAAI Conference on Artificial Intelligence
ojs.aaai.org

AI-mediated communication -- apology edition:

www.nytimes.com/2025/10/29/u...

And yes, there's academic research too:

www.sciencedirect.com/science/arti...

#AIMC
Their Professors Caught Them Cheating. They Used A.I. to Apologize.
www.nytimes.com