Luca Bertuzzi
@lucabertuzzi.bsky.social
Senior AI Correspondent at MLex. Award-winning reporter focused on digital policy & EU affairs. Ex Euractiv. Bylines at la Repubblica & Tagesspiegel.
The idea is to put the obligation instead on the Commission and member states to foster AI literacy, whatever that means.
November 6, 2025 at 1:55 PM
That is one interpretation of recital 106 that is likely to be challenged in court.
October 20, 2025 at 3:02 PM
More broadly, the reports hint at a paradigm shift: from measuring capacity to evaluating a model’s propensity to cause harm. They also outline a graduated compliance system. If confirmed, these approaches could reshape how GPAI obligations are applied under the AI Act. 4/4
October 16, 2025 at 8:53 AM
Beyond the much-debated compute threshold, the researchers identify three other factors to classify GPAI models with “systemic risk”: safety benchmarks, reach & high-impact capabilities. They suggest regulators should set clear thresholds in these areas. 3/4
October 16, 2025 at 8:53 AM
The studies go deep into how to technically define and assess GPAI models. One proposal borrows from cognitive psychology to determine whether an AI system is “general-purpose.” Another offers technical criteria to decide when a model is so altered it becomes a “new” one. 2/4
October 16, 2025 at 8:53 AM