Matti Minkkinen
@minkkinen.bsky.social
Postdoctoral Researcher (University of Turku), dad, amateur musician. Technologies and how we humans learn to live with them. AI governance and responsible AI.
I really want to follow a ton of people just so it looks like I’m following them. Then I don’t actually ever read any of their posts. I just read the posts of a handful of people posting on weird niche topics.
November 19, 2024 at 4:56 PM
Clarification here that AI auditing means auditing of AI systems, not auditing using AI.
November 19, 2024 at 6:16 AM
Reposted by Matti Minkkinen
Generative AI has been a thing in 2024. We’ve published a comprehensive analysis of its key legal challenges within EU law. Our report covers liability, privacy, intellectual property, and cybersecurity. @floridi.bsky.social
Link: www.sciencedirect.com/science/arti...
Generative AI in EU law: Liability, privacy, intellectual property, and cybersecurity
The complexity and emergent autonomy of Generative AI systems introduce challenges in predictability and legal compliance. This paper analyses some of…
www.sciencedirect.com
November 18, 2024 at 11:52 AM
Reposted by Matti Minkkinen
What governance model does the AI Act propose, and how are its enforcement responsibilities distributed among national and supranational bodies? We explore these questions in this article in the EJRR.
@jessrmorley.bsky.social @floridi.bsky.social

Link: www.cambridge.org/core/journal...
A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities | European Journal of Risk Regulation | Cambridge Core
A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities
www.cambridge.org
November 18, 2024 at 11:52 AM
Reposted by Matti Minkkinen
What risks does the AI Act prioritize, and which risk assessment methodology best aligns with it and EU legal values? We propose a semi-quantitative approach here: @floridi.bsky.social
Link: link.springer.com/article/10.1...
AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act - Digital Society
The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks ...
link.springer.com
November 18, 2024 at 11:52 AM
Reposted by Matti Minkkinen
Our article on the Accountability of AI is now published in AI and Society. We explore what accountability means in the context of AI services and argue that its meaning varies based on governance goals.
Link: link.springer.com/article/10.1...
Accountability in artificial intelligence: what it is and how it works - AI & SOCIETY
Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI s...
link.springer.com
November 18, 2024 at 11:52 AM
I’ve recently been looking at the idea, presented here, that humans in the loop may lose their unique human knowledge: misq.umn.edu/will-humans-...

To be honest, I’m not good with the formulas in the paper, but the argument is interesting.
Will Humans-in-the-Loop Become Borgs? Merits and Pitfalls of Working with AI (Open Access)
misq.umn.edu
September 12, 2024 at 5:15 PM
Familiar names from foresight circles are apparently also speaking about these topics: Sofi Kurki and Kamilla Karhunmaa. The link to different futures horizons is clear here.
September 11, 2024 at 8:35 AM
If toxic positivity exists, could there also be toxic hope?
September 11, 2024 at 8:03 AM