+ risks to users (like skills fade),
+ unsafe use (eg under-trust of AI outputs, something that can be seen in the evaluation),
+ & new failure modes (hey, could Scottish govt end up relying on AI tools built by UK govt to do policy work? can't see how that could go wrong...)