Navya
@tnotherthoughts.bsky.social
Law+Policy @StanfordLaw, @JindalGLS. Rambling about tech, tea and AI Gov in the majority world.

Palo Alto/India.
AI governance may benefit from fewer obvious observations and more focus on structuring effective incentives. What frameworks have truly worked in practice? What can motivate stakeholders to engage meaningfully beyond the initial flurry of consultations, panels, and conferences?
February 10, 2025 at 11:52 PM
...because without the right incentives, consultation is just another box to tick. Real stakeholder engagement happens when participation isn't just an obligation but a strategic advantage - when it aligns with business incentives, mitigates risk, or creates long-term value.
February 10, 2025 at 11:47 PM
So instead of repeating that consultation is necessary, should we maybe ask: How do we incentivize stakeholders who control vast amounts of data to participate in governance structures like exchanges, trusts, and other frameworks?
February 10, 2025 at 11:46 PM
Policymaking already moves at a snail’s pace. The last thing we need is to waste time dwelling on performative "engagement" that doesn’t change anything. How do we make stakeholder engagement worthwhile?
February 10, 2025 at 11:44 PM
A few years ago, I was trying to pilot a data exchange in Hyderabad, India. Private companies loved the idea...right up until they actually had to, you know...participate in one.
February 10, 2025 at 11:41 PM
I’ve tried facilitating this before - it’s tough when incentives are misaligned. Everyone says they want collaboration, but when it comes to action? The incentives not to participate often outweigh the benefits.
February 10, 2025 at 11:37 PM
It’s easy to point out that stakeholders aren’t talking to each other. But AI is already miles ahead of us in diagnosing problems. The real challenge? Creating the right incentives for engagement (or harmonization, if we’re feeling fancy) and actually moving towards solutions.
February 10, 2025 at 11:36 PM
Every time I see "we need multi-stakeholder consultation for AI regulation," I'm curious: who still needs to be convinced? We all agree that stakeholder consultation is important. The real question is: How do we make it happen?
February 10, 2025 at 11:30 PM
Reposted by Navya
Contrary to common beliefs, we find that misinformation isn’t universal or a general condition of our media ecosystem.

Instead, it's specifically associated with radical-right populist parties that spread misinformation as a political strategy. 5/
January 14, 2025 at 1:24 PM
Reposted by Navya
5. Most of Meta's US-based moderation staff was already in TX, so unless they also move the core product teams who work on trust & safety, this is no change. If those teams are staying in CA, this indicates Meta wants to position T&S as separate from core product mechanics-- a dangerous delusion!
January 7, 2025 at 5:43 PM
Reposted by Navya
In my view, that doesn't make much sense from any principled point of view. If you were primarily concerned with "free speech" you'd go *further* than they have, while if you were only concerned with ensuring you don't interfere with legislative debates you'd go *much less far*. 🧵 5/6
January 12, 2025 at 7:14 PM