Wil
@metroai.co
metroai.co
Spatial technologist preserving memory and designing just futures

🎓 MIT DUSP ’25 | 🌐 Thesis: Daufuskie3D.org
Preserving Gullah Geechee heritage with immersive tools
✍🏽 Author, People-Powered Gen AI: A Playbook for Civic Engagement
🧠 Founder @ metroai.co
Reposted by Wil
We also created an open repository of the AIAs, which we hope will encourage additional research on them. The repository is available here: osf.io/rk8ux/
An Analysis of Canada's Algorithmic Impact Assessments
Hosted on the Open Science Framework
osf.io
August 5, 2025 at 11:35 PM
Reposted by Wil
Unfortunately the paper is not open access. Send me a message or email if you cannot access it and would like a copy!

link.springer.com/article/10.1...
Design versus reality: assessing the results and compliance of algorithmic impact assessments - Digital Society
Algorithmic impact assessments (AIAs) have become a dominant regulatory instrument in governing artificial intelligence (AI). While there are noteworthy examples across the global north, Canada’s AIA is considered best practice worldwide. When AIAs are studied, evaluations have been based on the assessment of the instrument rather than examination of their answers. We examine Canada’s published AIAs. We report five findings: (1) Uneven compliance is observed in the completion of AIAs; (2) Reasons for automation legitimize efficiency and innovation narratives; (3) Impacts and trade-offs are framed as non-existent or positive, undermining harms; (4) Civil society organizations are absent from AIAs; and (5) Accountability is framed as processual mitigation of AI impacts. Despite the promise of AIAs for accountability of AI systems, our results reveal a “design-reality” gap between literature and practice. We observed that negative impacts were framed positively; input was not elicited from the public; and an over-emphasis on self-regulation conformed to organizational procedures instead of investigating outcomes. Although submission is mandatory, processual accountability failed to ensure compliance. We recommend strengthening accountability to include civil society, formalizing harms instead of emphasizing impacts or risks, and blending processual accountability with outcomes of AI systems.
link.springer.com
August 5, 2025 at 11:35 PM
This fr the worst, I’m sorry for you.
I hope it was put up so you can bring it tomorrow (or eat it when you get home)
July 8, 2025 at 1:45 PM
Reposted by Wil
Here's Hakeem Jeffries:
June 29, 2025 at 5:01 PM
Oooh, bet then I’m 100% for it.
June 14, 2025 at 6:37 PM
Leaning towards closed for now. Would that require abandoning the blacksky portion of the blueskyapp? I see a value in maintaining both.
June 13, 2025 at 7:05 PM