Aidan Kierans
aidankierans.bsky.social
PhD student at the University of Connecticut researching AI alignment, safety, and governance.
More info about me here: https://aidankierans.github.io/
Pinned
🤖 Calling all philosophers and AI researchers!
Our team at @uconn.bsky.social's RIET Lab is hosting a virtual workshop on Machine Ethics and Reasoning (MERe) on July 18, 2025.
We're bringing together philosophy PhDs, CS researchers & AI folks to advance computational approaches to moral reasoning 🧵
forms.gle
July 1, 2025 at 3:08 PM
Reposted by Aidan Kierans
The Singapore Consensus is on arXiv now -- arxiv.org/abs/2506.20702

It offers:
1. An overview of consensus technical AI safety priorities
2. An example of widespread international collaboration & agreement
The Singapore Consensus on Global AI Safety Research Priorities
Rapidly improving AI capabilities and autonomy hold significant promise of transformation, but are also driving vigorous debate on how to ensure that AI is safe, i.e., trustworthy, reliable, and secur...
June 27, 2025 at 10:24 PM
Reposted by Aidan Kierans
Come see our poster during the AI Alignment Track on Friday the 28th - 12:30pm! arxiv.org/abs/2406.042...
Quantifying Misalignment Between Agents
Growing concerns about the AI alignment problem have emerged in recent years, with previous work focusing mainly on (1) qualitative descriptions of the alignment problem; (2) attempting to align AI ac...
February 26, 2025 at 2:33 PM