Civic and Responsible AI Lab (CRAIL)
@civicandresponsibleai.com
A research lab working on Responsible AI (and Robotics) and on the use of AI for civil society and empowerment. Based at King's College London, UK.
Led by Martim Brandao (@martimbrandao.bsky.social).

Website: https://www.civicandresponsibleai.com/
Reposted by Civic and Responsible AI Lab (CRAIL)
Funded PhD Project 3: "Leveraging collaborative XAI for racism detection and explanation in political and media discourse", with Prof. Nicola Rollock

www.findaphd.com/phds/project...
Leveraging collaborative XAI for racism detection and explanation in political and media discourse at King’s College London on FindAPhD.com
January 19, 2026 at 12:50 PM
Reposted by Civic and Responsible AI Lab (CRAIL)
Funded PhD Project 2: "Work, Employment and Robots: Investigating Working Conditions in the Supply Chain of Robotics", with Funda Ustek Spilda @fundaustek.bsky.social

www.findaphd.com/phds/project...
Work, Employment and Robots: Investigating Working Conditions in the Supply Chain of Robotics at King’s College London on FindAPhD.com
January 19, 2026 at 12:49 PM
Reposted by Civic and Responsible AI Lab (CRAIL)
Funded PhD Project 1: "AI for AI Oversight? Evaluating and monitoring corporate AI risks using publicly available data", with Claudia Aradau @cearadau.bsky.social

www.findaphd.com/phds/project...
January 19, 2026 at 12:47 PM
Reposted by Civic and Responsible AI Lab (CRAIL)
Fully-funded #PhDPosition between KCL @civicandresponsibleai.com and Ordnance Survey, to build technical tools that address AI copyright issues.

www.findaphd.com/phds/project...

Deadline: Feb 27.
Eligibility: UK/home students or exceptional international students.

#AISafety #ResponsibleAI
GeoDataMonitor: Towards monitoring usage of geospatial datasets in machine learning models at King’s College London on FindAPhD.com
January 29, 2026 at 10:34 AM
Roundup of our robotics papers this year, 1/n: "Harvesting Perspectives" by Muhammad Malik investigates farm workers' working conditions, their perceptions of farm robots, and worker-centered visions of farm robotics. #ROMAN2025 #HRI #robots #AI #ResponsibleAI
doi.org/10.1109/ro-m...
Harvesting Perspectives: A Worker-Centered Inquiry into the Future of Fruit-Picking Farm Robots
December 18, 2025 at 12:12 PM
Reposted by Civic and Responsible AI Lab (CRAIL)
Robots powered by popular AI models are currently unsafe for general-purpose real-world use.

Researchers from @kingsnmes.bsky.social & @cmu.edu evaluated how robots that use large language models (LLMs) behave when they have access to personal information.

www.kcl.ac.uk/news/robots-...
Robots powered by popular AI models risk encouraging discrimination and violence | King's College London
November 11, 2025 at 3:38 PM
We'll be at #AIES2025 presenting Atmadeep's work on Postcolonial Ethics for Robots: www.martimbrandao.com/papers/Ghosh... We:
- analyse 7 major roboethics frameworks, identifying gaps for the Global South
- propose principles to make AI robots culturally responsive and genuinely empowering
October 18, 2025 at 4:48 PM
Our paper on safety & discrimination of LLM-driven robots is out! doi.org/10.1007/s123...
We find that LLMs:
- Are unsafe as decision-makers for HRI
- Are discriminatory in facial expression, proxemics, security, rescue, task assignment...
- Don't protect against dangerous, violent, or unlawful uses
October 17, 2025 at 3:23 PM
Hello world! We are CRAIL. Our goal is to contribute to Responsible AI and to use AI for civil society and the empowerment of marginalized groups.
Follow us to hear about the risks and social impacts of AI, critical examinations of AI fields, and new algorithms towards socially just and human-compatible tech.
October 17, 2025 at 10:49 AM