Jim Dilkes
jimdilkes.bsky.social
AI PhD at University of Southampton
high-risk AI deployment, and argue that technical solutions alone cannot resolve the complex sociotechnical challenges of trust in consequential algorithmic systems.

We hope you will join us next week for this exciting talk!

bit.ly/ccais-seminar
CCAIS Seminar Series - Schedule
Upcoming Talks
November 6, 2025 at 11:39 AM
These results challenge prevailing assumptions about explainable AI's efficacy in high-risk contexts and raise fundamental questions about deploying such systems in controversial domains. Dr. Mehrotra will discuss implications for XAI design, propose policy recommendations for
November 6, 2025 at 11:39 AM
predictive policing. Our findings reveal a troubling disconnect: hybrid explanations increased subjective trust among experts but did not improve decision-making accuracy. Critically, no explanation format successfully established appropriate trust in either user group.
November 6, 2025 at 11:39 AM
formats remains unclear, especially in ethically contentious applications.

This talk presents an empirical study examining how explanation modality (text, visual, and hybrid) and user expertise (retired police officers versus lay users) influence trust calibration in AI-based
November 6, 2025 at 11:39 AM
As AI systems proliferate in high-risk domains, users often exhibit trust miscalibration—either under-trusting capable systems or over-trusting flawed ones. While explainable AI promises to help users calibrate trust appropriately, the effectiveness of different explanation
November 6, 2025 at 11:39 AM
📅 Date: Thursday, 09 October 2025
⏰ Time: 13:00 BST / 14:00 CET
🔗Link: bit.ly/ccais-seminar

Abstract:
November 6, 2025 at 11:39 AM
The methodology of fine-tuning agents with intrinsic rewards offers a more transparent alternative to the currently dominant alignment methods (e.g., RLHF), and can enable self-improving / automated alignment without reliance on external data.
October 2, 2025 at 1:58 PM
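A minimal sketch of the idea behind intrinsic-reward fine-tuning (hypothetical illustration, not code from the talk): the agent's learning signal combines the environment's extrinsic reward with an intrinsic moral reward, with `shaped_reward`, `beta`, and the example values all assumed for illustration.

```python
def shaped_reward(r_extrinsic: float, r_intrinsic: float, beta: float = 0.5) -> float:
    """Combine extrinsic task reward with an intrinsic (e.g. moral) reward.

    beta controls how strongly the intrinsic signal shapes learning.
    """
    return r_extrinsic + beta * r_intrinsic

# An agent that earned task payoff 3 but incurred moral penalty -2:
print(shaped_reward(3.0, -2.0, beta=0.5))  # 2.0
```

Because the intrinsic term is an explicit, inspectable function rather than a learned preference model, this kind of shaping is more transparent than reward signals distilled from external human feedback data.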
...likely to converge on undesirable equilibria – for example, mutual defection in Prisoner's Dilemma-like situations.

In this talk, Dr. Tennant will present a novel RL-based methodology for the moral alignment of agentic AI systems.
October 2, 2025 at 1:58 PM
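The mutual-defection equilibrium mentioned above can be illustrated with a standard one-shot Prisoner's Dilemma payoff table (a textbook example, not material from the talk; the payoff values are the conventional ones):

```python
# Payoffs as (row player, column player); C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_action: str) -> str:
    """Row player's payoff-maximizing action against a fixed opponent action."""
    return max(["C", "D"], key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defection dominates whatever the opponent does, so self-interested
# learners are pulled toward (D, D) even though (C, C) pays both more.
print(best_response("C"), best_response("D"))  # D D
```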
Abstract:

As AI systems become increasingly agentic, the need to align their decision-making with human values and preferences grows in importance. Evidence suggests that without such alignment, purely self-interested agents interacting in multi-agent environments are...
October 2, 2025 at 1:58 PM
In this talk, Dr. Radulescu will sketch a vision for human-AI collectives, where humans and artificial agents cooperate to solve such complex challenges.

🔗 Register: uva-live.zoom.us/meeting/regi...
Welcome! You are invited to join a meeting: CCAIS Seminar Series. After registering, you will receive a confirmation email about joining the meeting.
September 4, 2025 at 1:14 PM
Multi-objective reinforcement learning (MORL) offers a more robust and adaptable solution by optimizing for a vector of rewards—such as fairness, diversity, and ethical norms. The resulting behaviors support transparency, explainability, and human alignment.
September 4, 2025 at 1:14 PM
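The contrast between a scalar reward and a MORL reward vector can be sketched in a few lines (a generic illustration, not code from the talk; the objective names, weights, and `scalarize` helper are assumed):

```python
import numpy as np

def scalarize(reward_vector: np.ndarray, weights: np.ndarray) -> float:
    """Linear scalarization: collapse a multi-objective reward to one number
    using stakeholder weights. MORL methods can also keep the vector intact
    and reason over trade-offs directly."""
    return float(np.dot(reward_vector, weights))

# Hypothetical objectives: [fairness, diversity, ethical-norm compliance]
r = np.array([0.8, 0.5, 0.9])
w = np.array([0.5, 0.2, 0.3])  # one stakeholder's priorities
print(scalarize(r, w))  # 0.77
```

Making the weights explicit is part of what supports transparency: different stakeholder priorities yield different, inspectable trade-offs, rather than being baked invisibly into a single scalar reward.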
Reinforcement learning is becoming a pivotal tool in designing solutions for such domains. However, traditional RL, which uses a single scalar reward, is insufficient for this vision.
September 4, 2025 at 1:14 PM
Abstract:

Most complex problems of social relevance—such as climate change mitigation, taxation policy design, or traffic management—involve multiple stakeholders and conflicting objectives. These problems are multi-agent and multi-objective by nature.
September 4, 2025 at 1:14 PM
Dr. Albers will discuss using reinforcement learning, informed by psychology, to create long-term effective behavior change support in contexts such as smoking cessation and physical activity coaching.
🧵4/4

🔗 Register: tiny.cc/ccais-semina...
Welcome! You are invited to join a meeting: CCAIS Seminar Series. After registering, you will receive a confirmation email about joining the meeting.
May 1, 2025 at 1:11 PM
Personalizing the support these apps provide by accounting for people's current and future states – such as motivation or knowledge – might increase their effectiveness, especially in the long run.
🧵3/4
May 1, 2025 at 1:11 PM
Abstract:

eHealth applications for behavior change have shown promise in helping people change behaviors such as smoking, physical inactivity, or unhealthy eating. However, many people quickly stop using these applications.
🧵2/4
May 1, 2025 at 1:11 PM
"Social AI and Multi-Agent Systems at Bristol: Towards Ethical Sociotechnical Systems"

Learn how the lab builds intelligent agents and multi-agent systems designed to align with societal norms and promote equity in AI.

Sign up: tiny.cc/ccais-semina...
https://tiny.cc/ccais-seminar-zoom
April 14, 2025 at 5:09 PM