AI Accountability Lab
@aial.ie
aial.ie
Trinity College Dublin’s Artificial Intelligence Accountability Lab (https://aial.ie/) was founded & is led by Dr Abeba Birhane. The lab studies AI technologies & their downstream societal impact, with the aim of fostering a greater ecology of AI accountability.
Pinned
The AIAL is looking for a highly driven Post-Doctoral Researcher who can design and implement research that improves transparency and accountability regarding the use of generative AI in public services.

Applications close on Dec 02, 2025.

More information: aial.ie/hiring/postd...
Reposted by AI Accountability Lab
New from #ADAPTRadio: Agentic AI – Convenience or Compromise?

Listen to a fireside chat with Dr. Abeba Birhane @abeba.bsky.social (@aial.ie @tcddublin.bsky.social) & Meredith Whittaker @meredithmeredith.bsky.social (President @signal.org) from #ADVANCE2025

Listen: www.adaptcentre.ie/podcasts/ada...
November 13, 2025 at 10:45 AM
Reposted by AI Accountability Lab
Our colleagues at the AI Accountability Lab (@aial.ie) in Trinity College Dublin are hiring a Postdoctoral Researcher on Generative AI (GenAI) in Public Service.

👉 Learn more and apply here: aial.ie/hiring/postd...

#AIResearch #GenerativeAI #PublicService #AIAL #ResponsibleAI
November 12, 2025 at 2:38 PM
Reposted by AI Accountability Lab
📣 I am hiring a postdoc! aial.ie/hiring/postd...

Applications from suitable candidates who are passionate about investigating the use of genAI in public service operations, with the aim of keeping governments transparent and accountable, are welcome.

Please share with your networks.
October 30, 2025 at 7:51 PM
The AIAL is looking for a highly driven Post-Doctoral Researcher who can design and implement research that improves transparency and accountability regarding the use of generative AI in public services.

Applications close on Dec 02, 2025.

More information: aial.ie/hiring/postd...
October 30, 2025 at 7:44 PM
Reposted by AI Accountability Lab
I really enjoyed talking AI "companions" (and a lot more) with Seth and Caroline - you can catch it on Spotify, too - open.spotify.com/episode/6S0r...
October 27, 2025 at 10:51 AM
Reposted by AI Accountability Lab
Can AI friendship go too far?

This week on the pod: Maribeth Rauh (formerly of Google DeepMind, now at AI Accountability Lab @aial.ie ) on why we treat AI like people — and how that illusion can spiral into real-world harm.

🎧 youtu.be/uAaYSIPsxCI

#AI #Chatbots #MentalHealth #Podcast #AIEthics
October 25, 2025 at 3:58 PM
Reposted by AI Accountability Lab
Why do we treat AI like it’s human?
This week, Maribeth Rauh (ex–Google DeepMind, now at the AI Accountability Lab @aial.ie ) joins us to unpack how AI companions use emotional design and manipulative UX to keep us hooked.

🎧 Listen: youtu.be/uAaYSIPsxCI?...

#AIethics #AIAccountability
October 24, 2025 at 6:26 PM
Reposted by AI Accountability Lab
This is a comprehensive conversation covering the current issues with AI companions: dark patterns, deceptive designs, emerging youth mental health issues, engagement-maximising business models, regulation (or the lack thereof), and how we are evaluating companion apps at @aial.ie
⚖️ No Rules for AI?

AI is evolving faster than our laws. Maribeth Rauh (ex–Google DeepMind, now at AI Accountability Lab @aial.ie ) explains how weak regulation—and seeing AI as “human”—can lead to real harm.

🎧 Listen now: youtu.be/uAaYSIPsxCI?...

#AI #TechPolicy #Podcast #AIRegulation #Ethics
October 24, 2025 at 1:26 PM
Reposted by AI Accountability Lab
⚖️ No Rules for AI?

AI is evolving faster than our laws. Maribeth Rauh (ex–Google DeepMind, now at AI Accountability Lab @aial.ie ) explains how weak regulation—and seeing AI as “human”—can lead to real harm.

🎧 Listen now: youtu.be/uAaYSIPsxCI?...

#AI #TechPolicy #Podcast #AIRegulation #Ethics
October 23, 2025 at 1:02 PM
Reposted by AI Accountability Lab
Curious about the current state of AI companions, how they are used, the various problems that arise with developing intimate relationships with these bots, and how we are approaching the evaluation of companions at @aial.ie? Then this in-depth conversation with @mbrauh.bsky.social is for you.

www.youtube.com/watch?v=uAaY...
October 22, 2025 at 11:15 AM
A conversation with our own @mbrauh.bsky.social covering her transition from DeepMind to @aial.ie, breaking down AI companions, why uptake has been so massive (particularly with minors), and the multitude of harms that arise with them, including AI sycophancy and the business models designed to keep users hooked.
We talk to AI like it’s human — because it talks back.

Maribeth Rauh (DeepMind → AI Accountability Lab) @aial.ie on why AI’s conversations feel real — and why that illusion matters.

🎧 youtu.be/uAaYSIPsxCI?...

#TheInternetIsCrack #Ethics #AI #DeepMind #AIAL #podcast
October 22, 2025 at 11:07 AM
Reposted by AI Accountability Lab
Looking forward to the fireside chat on #AgenticAI with Dr. Abeba Birhane @abeba.bsky.social (@aial.ie, @tcddublin.bsky.social) & Meredith Whittaker @meredithmeredith.bsky.social (President, @signal.org) at #ADVANCE2025 tomorrow. Agenda here: www.adaptcentre.ie/news-and-eve...
@researchireland.ie
October 15, 2025 at 3:27 PM
Reposted by AI Accountability Lab
The Advisory Council is composed of:

➖ Dr Abeba Birhane @abeba.bsky.social, Founder & Leader of @aial.ie

➖ Professor Joris van Hoboken @jvh.bsky.social, Professor of Information Law at @ivir-uva.bsky.social

➖ Professor Clare McGlynn @claremcglynn.bsky.social, Professor of Law at @durham.ac.uk
October 16, 2025 at 9:58 AM
Reposted by AI Accountability Lab
📢 Today, CDT Europe is proud to announce the creation of its new Advisory Council, an exciting milestone for our organisation as we strengthen our work at the intersection of #technology, policy, and democracy in Europe.

👇🏻 Read more on our website: cdt.org/insights/cdt...
CDT Europe Announces Inaugural Advisory Council
CDT Europe is proud to announce the creation of its new Advisory Council, an exciting milestone for our organisation as we strengthen our work at the intersection of technology, policy, and de...
cdt.org
October 16, 2025 at 9:58 AM
"[T]he ability of Bytedance’s models to create likenesses of copyrighted characters and real people unfortunately adds heat to the fire of scrambling to get ahead at any cost, and regardless of any kind of law or ethical implications.” @mbrauh.bsky.social

time.com/7321911/byte...
ByteDance’s AI Videos Are Scary Realistic. That’s a Problem for Truth Online.
ByteDance’s new AI visual models rival those from OpenAI and Google. But their spread raises concerns over deepfakes and copyright.
time.com
October 4, 2025 at 4:36 PM
Reposted by AI Accountability Lab
If passed, the CSA Regulation proposal would also harm whistleblowers, activists in political opposition, labour unions, people seeking abortions in places where it is criminalised, media freedom, marginalised groups & many others

please sign this petition & pass it on crm.edri.org/stop-scannin...
Children deserve a secure and safe internet | EDRi CiviCRM
crm.edri.org
September 25, 2025 at 5:22 PM
Reposted by AI Accountability Lab
📢 Together with 30 CSOs, we sent a letter to the European Commission and Member States urging them to uphold their commitment to swiftly implement the #AI Act, without any delay or reopening of the legislation, and to appoint national competent authorities.

👇🏻 Read the full statement: cdt.org/insights/joi...
Joint CSOs Open Letter on Keeping the AI Act National Implementation on Track
CDT Europe, together with 30 other civil society organisations, wrote an open letter to the European Commission and Member States to express our concerns regarding the timely implementation of the AI ...
cdt.org
September 24, 2025 at 8:44 AM
Reposted by AI Accountability Lab
"Belief that AI integration is essential—if not the only path—to societal progress is deeply flawed. Claims about AI’s capabilities lack evidence or are simply overinflated. Extractive, & destructive nature of the industry is often ignored" www.alliancemagazine.org/analysis/phi... from your truly
Philanthropy contingent upon AI adoption is regressive. Let's look at the facts - Alliance magazine
It’s common to hear about the ‘transformative powers’ of artificial intelligence (AI) across sectors. From education, and healthcare, to law, medicine, and the humanitarian sector, claims about the ‘p...
www.alliancemagazine.org
September 3, 2025 at 6:40 PM
Reposted by AI Accountability Lab
AI is the wrong tool to tackle complex societal & systemic problems. AI4SG is more about PR victories, boosting AI adoption (regardless of merit/usefulness) & laundering accountability for harmful tech, extractive practices, and abetting atrocities. From yours truly:
www.project-syndicate.org/magazine/ai-...
The False Promise of “AI for Social Good”
Abeba Birhane refutes industry claims about the technology's potential to solve complex social problems.
www.project-syndicate.org
September 15, 2025 at 8:10 PM
Reposted by AI Accountability Lab
In this short piece, I lean on embodied cog sci to argue that we should refuse & resist LLMs in education (pp. 53-58) unesdoc.unesco.org/in/documentV...

"the classroom is an environment where love, trust, empathy, care & humility are fostered & mutually cultivated through dialogical interactions"
September 7, 2025 at 10:06 AM
Reposted by AI Accountability Lab
We at @aial.ie are investigating amplification/censorship on X/Twitter in the EU. Who are the major EU:
- politicians & regulators
- journalists
- influencers & public intellectuals
with high visibility/influence on X, across both left & right political ideologies?

We're interested in existing datasets too.
July 23, 2025 at 12:56 PM
Reposted by AI Accountability Lab
Amazing group of AI auditing scholars and practitioners at the AI Accountability Lab @aial.ie, led by @abeba.bsky.social
We are thrilled to finally introduce our lab members aial.ie/people/

1/
July 10, 2025 at 6:38 PM
We are thrilled to finally introduce our lab members aial.ie/people/

1/
July 10, 2025 at 6:18 PM
Reposted by AI Accountability Lab
We call on the Commission to refrain from pursuing a deregulation agenda and to champion the proper enforcement and implementation of the AI Act and the wider EU #digital rulebook.
July 9, 2025 at 9:10 AM
Reposted by AI Accountability Lab
We firmly oppose any attempt to delay or re-open the #AIAct, particularly in light of the growing trend of deregulation, which risks undermining key accountability mechanisms and hard-won rights enshrined in EU law that protect people, the planet, justice, and #democracy.
July 9, 2025 at 9:10 AM