HAI Fellow @ Stanford focusing on risk & safety 🖇️ 🦜
Their “Scientist AI” proposal would let us disable agentic and planning components—building in off-switches from the start.
📄 arxiv.org/abs/2405.20009 #bluesky
- AIAAIC (www.aiaaic.org/aiaaic-repos...) and
- MIT's AI Incident Tracker (airisk.mit.edu/ai-incident-...).
Pretty shocking to see the numbers on autonomous vehicle incidents—very few of these ever reach the headlines.
AI safety needs tools to track compound harm.
📑 arxiv.org/abs/2401.07836
#TechEthics #bluesky
To what extent is this happening in practice?
📄 arxiv.org/abs/2305.15324
📄 arxiv.org/abs/2401.07836