The Internet Ethics program--Markkula Center for Applied Ethics
@iethics.bsky.social
Irina Raicu behind the keyboard.
Pinned
ICYMI: "When It’s About Power/Justice/Human Rights, It IS about [#tech and other] #Ethics": scu.edu/ethics/focus... #internet #law #AI
When It’s About Power/Justice/Human Rights, It IS about Ethics
There is nothing about applied ethics that inherently leaves out social-political matters.
scu.edu
Reposted by The Internet Ethics program--Markkula Center for Applied Ethics
"... OpenAI and others are starting to keep their strongest models in-house, enlisting #AI to build more capable versions of itself. As that shift happens, there are fewer public deployments that will highlight problems; auditing becomes the only real check": stevenadler.substack.com/p/dont-let-o...
Don't let OpenAI grade its own homework
OpenAI's compliance with California law is questionable. Someone else should be checking.
stevenadler.substack.com
February 13, 2026 at 8:21 PM
Reposted by The Internet Ethics program--Markkula Center for Applied Ethics
One task "offered $10 for listening to a podcast episode with the RentAHuman founder and tweeting out an insight.... [T]he agent offering the bounty said it would attempt to suss out any bot-written responses using a program that detects AI-generated text." arstechnica.com/ai/2026/02/i... #ethics
I spent two days gigging at RentAHuman and didn't make a single cent
These bots supposedly need a human body to accomplish great things in meatspace.
arstechnica.com
February 13, 2026 at 9:00 PM
Reposted by The Internet Ethics program--Markkula Center for Applied Ethics
"The researcher identified age as a significant factor in the severity of the bias. Older participants demonstrated a stronger negative reaction to #AI #art. Younger audiences showed much weaker negative effects." www.psypost.org/bias-against... #ethics #tech
Bias against AI art is so deep it changes how viewers perceive color and brightness
New research suggests that labeling artwork as AI-created diminishes how viewers perceive its beauty and meaning. This bias appears to influence even basic visual processing.
www.psypost.org
February 13, 2026 at 9:02 PM
Reposted by The Internet Ethics program--Markkula Center for Applied Ethics
EU lawmakers found their government-issued devices were blocked from using the baked-in AI tools, amid fears that sensitive information could turn up on the U.S. servers of AI companies.
European Parliament blocks AI on lawmakers' devices, citing security risks | TechCrunch
techcrunch.com
February 17, 2026 at 4:34 PM
Reposted by The Internet Ethics program--Markkula Center for Applied Ethics
An internal research study at Meta found that parental supervision may not help teens regulate their social media, and teens with trauma are more inclined to overuse social media.
Meta's own research found parental supervision doesn't really help curb teens' compulsive social media use | TechCrunch
techcrunch.com
February 17, 2026 at 8:50 PM
"The Defense Department is threatening to blacklist #Anthropic over limits on military use, potentially putting one of its top contractors in a bind": www.fastcompany.com/91493997/pal... #ethics #AI #tech #gov #business #SiliconValley
Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon
The Defense Department is threatening to blacklist Anthropic over limits on military use, potentially putting one of its top contractors in a bind.
www.fastcompany.com
February 18, 2026 at 2:39 AM
"Flock devices have been installed by more than 100 public #school systems nationally, ... and audit logs ... show campus camera feeds are captured in a national database that police agencies across the country can access": theguardian.com/us-news/2026... #ethics #tech #education #privacy
Local police aid ICE by tapping school cameras amid Trump’s immigration crackdown
Local police assisted federal immigration agents by repeatedly searching school cameras that record license plate numbers, data show
theguardian.com
February 18, 2026 at 2:04 AM
"Despite the high bar for censoring online speech, lawsuits trace an escalating pattern of DHS increasingly targeting websites, app stores, and platforms—many that have been willing to remove content the #government dislikes": arstechnica.com/tech-policy/... #ethics #law #tech #contentmoderation
Platforms bend over backward to help DHS censor ICE critics, advocates say
Pam Bondi and Kristi Noem sued for coercing platforms into censoring ICE posts.
arstechnica.com
February 13, 2026 at 9:15 PM
Reposted by The Internet Ethics program--Markkula Center for Applied Ethics
7/ VPNs are in the 2020s what Antivirus was in the 2000s.

Something that, thanks to marketing, everyone was conditioned to believe was the first step in being secure online.

When, in fact, if you talked to experts, the consensus view was: nope, not even close.

research.google/pubs/no-one-...
February 6, 2026 at 7:54 PM
Reposted by The Internet Ethics program--Markkula Center for Applied Ethics
A VPN company server was seized.

Reminder: your 'privacy' VPN is still a physical server in somebody's jurisdiction.

The claim that RAM-disk servers offer protection is... interesting.

I know nothing more about this case, but hot-plug rigs that let authorities seize a server without cutting power are common. 1/
February 6, 2026 at 7:45 PM
Reposted by The Internet Ethics program--Markkula Center for Applied Ethics
A judge in 🇪🇸Barcelona has formally charged two former Civil Guard directors for the first time in connection with the Pegasus #spyware used to spy on the Catalan independence movement. The judge has also summoned the former director of the intelligence agency CNI to testify as a suspect.

en.ara.cat/politics/two...
Two Civil Guard officers charged for the first time in the Pegasus case
The judge also summons former CNI director Paz Esteban to testify as a suspect
en.ara.cat
February 9, 2026 at 6:42 PM
Reposted by The Internet Ethics program--Markkula Center for Applied Ethics
4 / What's the difference between Pegasus & Paragon's Graphite?

Well, while Pegasus is built around hacking the whole device, then gaining access to apps... Paragon frames their tech as 'light touch' and 'within app'...
February 11, 2026 at 8:56 PM
Reposted by The Internet Ethics program--Markkula Center for Applied Ethics
4/ If Ring were smart, they'd highlight privacy features, make them easier to use & introduce new ones.

Americans want more control of their privacy right now.

And in the longer term? Stop trying to build a surveillance dystopia consumers didn't ask for & focus on shipping good, private products.
February 13, 2026 at 1:45 AM
Reposted by The Internet Ethics program--Markkula Center for Applied Ethics
UPDATE: Ring just cancelled their partnership with #Flock.

Should you trust #ring now? No.

After all, they thought this was a good idea.

And their statement doesn't acknowledge the real issue. More on that in a second.

But this shows: pressure works on privacy 1/
February 13, 2026 at 1:41 AM
Reposted by The Internet Ethics program--Markkula Center for Applied Ethics
It’s depressing to see women try to claw back control of their images by imploring a chatbot not to misuse their photos.

Posts like these are reminiscent of the copyright declarations you used to see on Facebook. Totally understandable but unlikely to have the desired effect.
January 8, 2026 at 6:33 PM
Reposted by The Internet Ethics program--Markkula Center for Applied Ethics
Automated Trust & Safety (T&S) tools too often perform poorly in widely spoken, low-resource languages. CDT convened four roundtables with AI experts & data annotators to dig into why — and how to build tools that work equitably for all users, not just English speakers. cdt.org/insights/bet...
February 9, 2026 at 7:03 PM
Reposted by The Internet Ethics program--Markkula Center for Applied Ethics
"NLP experts articulated a need for #research that seeks to understand how #language evolves online, how users interact with each other, and how network effects and digital affordances shape online activity." #ethics #internet #contentmoderation #tech #AI
Automated Trust & Safety (T&S) tools too often perform poorly in widely spoken, low-resource languages. CDT convened four roundtables with AI experts & data annotators to dig into why — and how to build tools that work equitably for all users, not just English speakers. cdt.org/insights/bet...
February 9, 2026 at 7:28 PM