Felix M. Simon
@felixsimon.bsky.social
Research Fellow in AI and News, Reuters Institute, Oxford University | Research Associate & PhD, Oxford Internet Institute | AI, news, (mis)info, democracy | Affiliate Tow Center, CITAP | Media advisor | My views etc…

https://www.felixsimon.net/
Thank you, Hannes :)
November 8, 2025 at 11:12 AM
Feedback welcome, especially on the theoretical section and the discussion, as well as literature we may have missed! So feel free to plug your own or other people’s material, all of which will be appreciated as Liz and I work towards a journal submission.

🔗Pre-print: buff.ly/ObXx74j
November 7, 2025 at 4:46 PM
We are very grateful to the team at the Financial Times, particularly @mattgarrahan.bsky.social, for supporting this study from the outset – and to the participants from the FT who volunteered their precious time to help us in understanding this issue.
November 7, 2025 at 4:46 PM
Although this is a single‑organisation case study, we think there are some “no‑regret” principles that can be useful to other organisations:
November 7, 2025 at 4:46 PM
Crucially, we argue that AI transparency is best seen as a spectrum: optimising one factor (e.g. maximum disclosure) can undermine others (e.g. perceived trust or revenue). There does not seem to be a one‑size‑fits‑all rule; instead, transparency must adapt to organisational context, audiences and technology.
November 7, 2025 at 4:45 PM
Internally, managerial and commercial logics push for efficient adoption and risk management; externally, professional journalism ethics and commercial imperatives drive the aim of remaining trustworthy.
November 7, 2025 at 4:45 PM
For those of you more academically interested in this, we argue that AI transparency at the FT is shaped by isomorphic pressures – regulations, peer practices and audience expectations – and by intersecting institutional logics.
November 7, 2025 at 4:45 PM
Intriguing here is also the question of how much longer AI transparency will be required, especially given the actions of tech companies.
November 7, 2025 at 4:45 PM
4️⃣ Persistent challenges include achieving consistent labelling (especially on mobile), breaking organisational silos, keeping pace with evolving models and norms, guarding against creeping human over‑reliance, and mitigating “transparency backfire”, where disclosures reduce trust.
November 7, 2025 at 4:45 PM
3️⃣ Nine factors shape what, when & how the FT discloses AI use. These include legal/provider requirements, industry benchmarking, the degree of human oversight, the nature of the task, system novelty, audience expectations & research, perceived risk, commercial sensitivities and design constraints.
November 7, 2025 at 4:45 PM
No‑human‑in‑the‑loop features (e.g. Ask FT) get prominent warnings, whereas AI‑assisted, journalist‑edited outputs (e.g. bullet‑point summaries) get lighter labelling.
November 7, 2025 at 4:45 PM
2️⃣ Disclosure is calibrated to context. Internally, full disclosure aims to reduce friction and surface errors early; externally, labels are scaled to the degree of autonomy and human oversight.
November 7, 2025 at 4:45 PM
1️⃣ AI transparency ≠ a binary. At the FT it’s a hybrid of policy, process and practice. Senior leadership sets explicit principles, cross‑functional panels vet new applications, and AI use is signposted in internal/external tools and reinforced through training.
November 7, 2025 at 4:45 PM
Link to the pre-print here; summary follows below.

🔗Pre-print: buff.ly/ObXx74j
November 7, 2025 at 4:45 PM
The way I read this chart is that people encounter more information about politics and a broader range of views, with a skew towards information that disagrees with one’s own political views — not sure it follows that this is generally bad or causally related to the unravelling of democracy?
November 7, 2025 at 12:02 PM
I look forward to spending time with the brilliant minds across disciplines at Corpus, having already met some of them in recent weeks. And if you are in town, come visit.
November 7, 2025 at 11:01 AM
A big thank you to @gcapoccia.bsky.social, Linda Eggert, and @mitalilive.bsky.social for supporting my application earlier this year.
November 7, 2025 at 11:01 AM
…something I look forward to, because the study of AI and information benefits from inputs that go beyond the usual suspects in computer science and political communication.
November 7, 2025 at 11:01 AM
Such fellowships are one of the idiosyncrasies of this place, but apart from the occasional college meal, they give you the opportunity to meet scholars and students from a range of different disciplines and get out of your disciplinary wheelhouse…
November 7, 2025 at 11:01 AM
Haha I know and you are right to be sceptical — but it’s something colleagues and I have seen come up in qualitative research, too, where people weren’t primed in this way. That doesn’t mean copyright law is the right way to address this, but I’d be wary of assuming that publics don’t care.
November 6, 2025 at 11:05 AM