Fabio Tollon
@ftollon.bsky.social
Postdoctoral Researcher in the Ethics of AI at the University of Edinburgh.
Carbohydrate enthusiast.
fabiotollon.wixsite.com/fabiotollon
Pinned
We did a thing! It's good! Read it!!!!😡
📢 New publication out! 📢

Co-authored with the brilliant @sj-bennett.bsky.social and @ftollon.bsky.social, as part of @ludovico-rella.bsky.social and @fabio-iapaolo.bsky.social's SI on ‘Where’s the Intelligence in AI? Mattering, Placing, and De-individuating AI' link.springer.com/article/10.1...

🧵
Reposted by Fabio Tollon
BRAID researcher @ftollon.bsky.social's work features in the State of AI Ethics Report (SAIER) Vol. 7, “AI at the Crossroads: A Practitioner’s Guide to Community-Centered Solutions,” published by the Montreal AI Ethics Institute.
November 7, 2025 at 8:41 AM
Thrilled to have contributed to the State of AI Ethics Report Vol. 7, published by @mtlaiethics.bsky.social 🌍 58 contributors, 17 chapters, 48 essays, one goal: move from principles to practice (check out Chapter 2 for my contribution)

Read: montrealethics.ai/saier-vol-7-...
SAIER Vol 7 landing | Montreal AI Ethics Institute
montrealethics.ai
November 4, 2025 at 5:26 PM
Excited to be heading to Amsterdam to present some work that @loehrgui.bsky.social and I have been sketching out on AI assertion.
We suggest that some AI chatbots might be answerable, thus rendering them capable of asserting. Strange stuff.

#philsky #AIethics

www.pt-ai.org/2025/program...
October 22, 2025 at 9:38 AM
Reposted by Fabio Tollon
Join us for our CTMF Flagship Lecture, where we will hear from Professor @jpsullins.bsky.social on the surprising role human wisdom is playing in helping us navigate the challenges of AI technologies and create a more humane future.

🗓️ 15 October 18.00 - 19.30 BST
📍 EFI & online
🎟️ edin.ac/3Kl8em6
September 24, 2025 at 12:29 PM
Reposted by Fabio Tollon
Thrilled to be joining @ftollon.bsky.social at RSA House and online, 9th October at 6pm (UTC+1), with @taniaduarte.bsky.social to discuss Tollon’s recent work with @shannonvallor.bsky.social on the #ResponsibleAI landscape.

www.eventbrite.co.uk/e/the-respon...
September 19, 2025 at 8:06 PM
As if you haven't heard enough about AI - but this time it'll be good, I promise
Thrilled to be joining @ftollon.bsky.social at RSA House and online, 9th October at 6pm (UTC+1), with @taniaduarte.bsky.social to discuss Tollon’s recent work with @shannonvallor.bsky.social on the #ResponsibleAI landscape.

www.eventbrite.co.uk/e/the-respon...
September 22, 2025 at 8:36 AM
Reposted by Fabio Tollon
One week on from the BRAID Community Gathering, we're still feeling the energy from a day filled with insight, authenticity and connection.

Heartfelt thanks to our speakers, attendees, and our community: your being there made it what it was.
June 25, 2025 at 2:17 PM
Reposted by Fabio Tollon
Fantastic report by @ftollon.bsky.social and @shannonvallor.bsky.social, untangling the many faces of 'responsible AI'. Sharp, timely, and definitely worth reading! 👇👇

#philsky #philAI
Happy to share that our Landscape Study on Responsible AI has just been published: zenodo.org/records/1519...

If you can't be bothered to read the whole thing, we even summarized it for you: braiduk.org/the-responsi...

@braiduk.bsky.social @technomoralfutures.bsky.social
The Responsible AI Ecosystem: Seven Lessons from the BRAID Landscape Study
braiduk.org
June 19, 2025 at 1:12 PM
Happy to share that our Landscape Study on Responsible AI has just been published: zenodo.org/records/1519...

If you can't be bothered to read the whole thing, we even summarized it for you: braiduk.org/the-responsi...

@braiduk.bsky.social @technomoralfutures.bsky.social
The Responsible AI Ecosystem: Seven Lessons from the BRAID Landscape Study
braiduk.org
June 19, 2025 at 8:14 AM
Reposted by Fabio Tollon
Published today: The Responsible AI Ecosystem: A BRAID Landscape Study.
@ftollon.bsky.social & @shannonvallor.bsky.social introduce us to the report in the latest BRAID blog braiduk.org/the-responsi...
The Responsible AI Ecosystem: Seven Lessons from the BRAID Landscape Study
braiduk.org
June 18, 2025 at 3:34 PM
Reposted by Fabio Tollon
📢 New paper forthcoming in PPR on the connection between trust and risk

philpapers.org/rec/JOPTRA
Matthew Jope, Trust, Risk, and Mere Vulnerability - PhilPapers
Many philosophers of trust endorse the idea that trust is inherently risky. This raises the question of how exactly we ought to understand the relevant notion of risk. Should we understand ...
philpapers.org
June 11, 2025 at 10:26 AM
Reposted by Fabio Tollon
Happening today at 1pm (CST)
@ftollon.bsky.social visits Northern Illinois University to talk about his research in #AI and Responsibility.
March 21, 2025 at 12:49 PM
Very excited for this! If you happen to be around Illinois come and hang out.
"Responsibility Gaps and Emerging Technology"
Lecture by @ftollon.bsky.social of @edinburghuni.bsky.social, this Friday (21 March) at 1pm. Sponsored by the Dept. of Communication at Northern Illinois University. Free and open to all.
March 16, 2025 at 4:30 PM
Reposted by Fabio Tollon
"Responsibility Gaps and Emerging Technology"
Lecture by @ftollon.bsky.social of @edinburghuni.bsky.social, this Friday (21 March) at 1pm. Sponsored by the Dept. of Communication at Northern Illinois University. Free and open to all.
March 16, 2025 at 4:10 PM
Reposted by Fabio Tollon
Short answer: yes, emotions can sometimes be unfair to others; yes, those others can sometimes appropriately blame you for them; but blame can be resolved.

Long answer: https://www.taylorfrancis.com/books/oa-mono/10.4324/9781003498032/unfair-emotions-jonas-blatter
#philsky #philosophy
Unfair Emotions | Their Morality and Blameworthiness | Jonas Blatter
This book provides a novel philosophical account of the unfairness of certain emotions. It explains how the concept of unfairness can be applied to emotions and
www.taylorfrancis.com
March 5, 2025 at 4:24 PM
Reposted by Fabio Tollon
Last week, a group of our BRAID researchers led by Dr Paula Westenberger & Dr Anna-Maria Sichani submitted a response to the UK govt #copyright & #AI consultation, advocating for the rights of creatives & transparency in AI development. Read the response: zenodo.org/records/1494... Well done team!
BRAID researchers' response to UK Government copyright and AI consultation
This response to the UK Government consultation on AI and Copyright was prepared by researchers in the Bridging Responsible AI Divides (BRAID) Programme. BRAID is a UK-wide programme dedicated to inte...
zenodo.org
March 4, 2025 at 12:17 PM
Reposted by Fabio Tollon
Earlier this month, we visited the University’s Easter Bush campus for a thought-provoking workshop on how data ethics applies across their key themes of Sustainable Agriculture, Infectious Diseases and Enhancing Health.

Read our blog post about the experience and key takeaways! ▶️ edin.ac/4hUL3Lh
February 27, 2025 at 2:24 PM
Reposted by Fabio Tollon
Thrilled to announce our new BRAID demonstrators. Two projects exploring responsible/equitable use of AI in the #CreativeSector. A third exploring AI’s #EnvironmentalImpact. Groundbreaking projects embedding Arts & Humanities alongside AI expertise. #ResponsibleAI edin.ac/3WNBzZY
Responsible AI projects to boost creative and environment sectors
Three BRAID Responsible AI Demonstrator projects will explore the uses of AI in live music and the arts as well as the environmental governance of AI.
edin.ac
February 5, 2025 at 12:19 PM
Reposted by Fabio Tollon
Suddenly, out of nowhere, a declassified World War II-era CIA guide to sabotaging fascism in the workplace has become one of the most popular free ebooks on the internet:

www.404media.co/declassified...
Declassified CIA Guide to Sabotaging Fascism Is Suddenly Viral
The World War II-era "Simple Sabotage Field Manual" is full of steps that office workers can take to resist leadership.
www.404media.co
January 29, 2025 at 8:53 PM
Reposted by Fabio Tollon
As BRAID researcher @ftollon.bsky.social says, they did a thing! New paper exploring the ways specific epistemic legacies and supply chains of labour shape AI practice & governance. edin.ac/4goqJAu @benedetta.bsky.social @sj-bennett.bsky.social
January 29, 2025 at 1:46 PM
Reposted by Fabio Tollon
Latest from the special issue I’m guest-editing with @ludovico-rella.bsky.social for AI & Society—a fantastic piece by @benedetta.bsky.social, @sj-bennett.bsky.social & @ftollon.bsky.social: “Everybody knows what a pothole is”: representations of work and intelligence in AI practice and governance
📢 New publication out! 📢

Co-authored with the brilliant @sj-bennett.bsky.social and @ftollon.bsky.social, as part of @ludovico-rella.bsky.social and @fabio-iapaolo.bsky.social's SI on ‘Where’s the Intelligence in AI? Mattering, Placing, and De-individuating AI' link.springer.com/article/10.1...

🧵
January 27, 2025 at 2:01 PM
We did a thing! It's good! Read it!!!!😡
📢 New publication out! 📢

Co-authored with the brilliant @sj-bennett.bsky.social and @ftollon.bsky.social, as part of @ludovico-rella.bsky.social and @fabio-iapaolo.bsky.social's SI on ‘Where’s the Intelligence in AI? Mattering, Placing, and De-individuating AI' link.springer.com/article/10.1...

🧵
January 27, 2025 at 1:54 PM
Reposted by Fabio Tollon
With the global push for AI innovation, there's never been a better time for international collaboration on responsible AI. We start the new year with a #BRAID call for collaboration between UK and US institutions: www.ukri.org/opportunity/... #responsibleAI #funding
UK Research and Innovation
UKRI convenes, catalyses and invests in close collaboration with others to build a thriving, inclusive research and innovation system.
www.ukri.org
January 15, 2025 at 11:06 AM
AI will be "mainlined into UK's veins". Make it stop, make all of it stop.

www.theguardian.com/politics/202...
‘Mainlined into UK’s veins’: Labour announces huge public rollout of AI
Plans to make UK world leader in AI sector include opening access to NHS and other public data
www.theguardian.com
January 13, 2025 at 6:59 AM
Reposted by Fabio Tollon
Stop Forcing A.I. into Fucking EVERYTHING!
December 24, 2024 at 3:11 AM