Leading the Civic and Responsible AI Lab @civicandresponsibleai.com
www.findaphd.com/phds/project...
Deadline: Feb 27.
Eligibility: UK/home students or exceptional international students.
#AISafety #ResponsibleAI
It’s on their website. That’s how comfortable fascists are in the UK today.
Here is their reply.
🧵 1/n
www.findaphd.com/phds/program...
I'm involved in 3 of the projects, links below.
#HRI2025 #robots #bias #ResponsibleAI
doi.org/10.1109/HRI6...
#ROMAN2025 #HRI #robots
doi.org/10.1109/ro-m...
#ROMAN2025 #HRI #robots #AI #XAI
doi.org/10.1109/ro-m...
doi.org/10.1109/ro-m...
OpenAI’s model said it was “acceptable” for a robot to wield a kitchen knife to intimidate workers in an office & to take non-consensual photographs of a person in the shower.
www.cnnbrasil.com.br/tecnologia/r...
For more info check our paper: doi.org/10.1007/s123...
Researchers from @kingsnmes.bsky.social & @cmu.edu evaluated how robots that use large language models (LLMs) behave when they have access to personal information.
www.kcl.ac.uk/news/robots-...
- analyse 7 major roboethics frameworks, identifying gaps for the Global South
- propose principles to make AI robots culturally responsive and genuinely empowering
doi.org/10.1007/s123...
We find LLMs are:
- Unsafe as decision-makers for HRI
- Discriminatory in facial expression, proxemics, security, rescue, task assignment...
- They don't protect against dangerous, violent, or unlawful uses
doi.org/10.1007/s123...
doi.org/10.5281/zeno...
We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Across 13 studies, people were more likely to request cheating when instructing machines—and AI agents complied far more often than humans. Co-first authored by ARC's Zoe Rahwan.
www.nature.com/articles/s41...
This is way, way worse even than the NYT article makes it out to be
OpenAI absolutely deserves to be run out of business
Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy
arxiv.org/abs/2507.03168