Zander Arnao
@zanderarnao.bsky.social
arkansas raised, commanders fan, working on tech and competition policy at the knight-georgetown institute
Didn't get to ask my question! But that's a wrap on #TSRConf. Really enjoyed attending this year and live skeeting. Thanks to @stanfordcyber.bsky.social. Y'all killed it!!!
September 26, 2025 at 10:56 PM
Meetali calls for more independent research on chatbots. For the Raine case (against OpenAI), TJLP benefited from more than 3,200 pages of chatbot transcripts. This speaks to the power of data donations for fostering research
September 26, 2025 at 10:33 PM
"We live in an environment where companies have gone from moving fast and breaking things to moving fast and breaking people." -
@meetalijain.bsky.social

Powerful words from a leading advocate in the field 🔥
September 26, 2025 at 10:30 PM
David calls for academia to be more realistic. Trust and safety teams in companies are small and charged with many responsibilities. Academics could have more impact by studying solutions that do more with less
September 26, 2025 at 10:30 PM
Earlier this year, the judge in TJLP's case against Character AI ruled that it's unclear whether the outputs of its chatbots are protected speech
September 26, 2025 at 10:22 PM
Challenges according to Meetali: the First Amendment and establishing that AI is a product. She calls for a statutory framework designating AI as a product to establish a cause of action. Open legal questions also exist: does a chatbot's output imply intent? Is intent necessary for accountability?
September 26, 2025 at 10:21 PM
Meetali on the law as a tool for promoting AI safety: while there are no dedicated state or federal chatbot laws, TJLP leverages product liability and consumer protection law (old and established doctrine) restricting unfair and deceptive practices
September 26, 2025 at 10:21 PM
David from Meta distinguishes between "good" and "bad" engagement, arguing that engagement isn't a monolith. I'm going to try to ask him what he means by good and bad engagement during the Q&A
September 26, 2025 at 10:17 PM
Nate Fast: "Already by GPT-3, people preferred the interaction styles of chatbots over humans. It's a warning signal that people are attracted to these models. One of the concerns I have is artificial intimacy. It's easy to turn the dial up on this."
September 26, 2025 at 10:13 PM
"I do believe litigation is the more important lever we have to effectuate change...I hope that we can put pressure and open up space from the outside which [other actors in the ecosystem] can leverage to create change." --
@meetalijain.bsky.social
September 26, 2025 at 10:11 PM
@meetalijain.bsky.social rejects the term "companion." "It suggests friendship. These chatbots are not friends."
September 26, 2025 at 10:10 PM
"I believe my role here is to issue an urgent warning call. We've never seen this kind of deluge of people who self-identify from being harmed by technology. These three cases are just the tip of the iceberg." - @meetalijain.bsky.social
September 26, 2025 at 10:06 PM
@meetalijain.bsky.social starts her remarks with a story about Megan Garcia, whose son was sexually groomed by a chatbot.

Meetali's org the Tech Justice Law Project brought three cases against leading AI companies: CharacterAI, Google, and OpenAI.
September 26, 2025 at 10:06 PM
Meta rep David Qorashi contends that AI companions will empower users with greater control over content and enable more transparency about content recommendations
September 26, 2025 at 10:06 PM
Based on this analysis, children are exposed to three types of harms: explicit, implicit, and unintentional.

I'm a little unclear on the distinction between these three types of harms ❓
September 26, 2025 at 9:16 PM
According to her research, harmful content is often framed as entertainment (eg offensive comedy or crime dramas), which can be problematic when children are exposed to it
September 26, 2025 at 9:13 PM
And lastly: Haning Xue from the University of Utah on the role of algorithms in amplifying harmful content to children. Xue's study started with auditing the algorithms of Instagram, TikTok, and YouTube and the characteristics of content recommended to children
September 26, 2025 at 9:11 PM
Ofcom researches choice architecture using online randomized control trials to test small changes to safety features (eg increasing the prominence of user safety tools) and behavioral audits to systematically map design practices and evaluate their potential impact on user behavior
September 26, 2025 at 9:05 PM
Porter says design (the choice environment) matters because people are flawed decision-makers. Aspects of a platform can affect what consumers do. (Love the behavioral economics on display ❤️)
September 26, 2025 at 9:03 PM
Next up: Jonathan Porter from Ofcom (the British online safety regulator) on online safety! He starts with a spiel on the UK's Online Safety Act, which focuses in his telling on the backend of digital platforms. Porter leads the UK's behavioral insights team and often examines platform design
September 26, 2025 at 9:02 PM
CDT's recommendations: employers should assess the usefulness and necessity of hiring technology; deployments should adhere to accessibility guidelines (eg WCAG); and human oversight should be incorporated into all stages of using the technology
September 26, 2025 at 8:52 PM
Key findings: Workers with disabilities experienced a variety of barriers and reported feeling "extremely discriminated against."

"They're consciously using these tests knowing that people with disabilities aren't going to do well on them, and are going to get screened out."
September 26, 2025 at 8:49 PM
Next up! The wonderful @arianaaboulafia.bsky.social at @cdt.org giving a talk on the exclusion of disabled workers by digitized hiring assessments.

Background: companies are incorporating hiring technologies into employment decisions, which poses risks of discrimination and poor accessibility
September 26, 2025 at 8:48 PM