Mark Wong
@markwong.bsky.social
Senior Lecturer/Assoc. Professor & Joint Subject Group Lead, Social and Urban Policy uofglasgow.bsky.social | racial #bias & racism in #AI | Racial justice | Co-design data, AI, games | Digital, Responsible AI, #anti-racism policy | climate action. He/him
So sorry to see this @abeba.bsky.social. This censorship/complicity is unacceptable and the stress you were put under was inexcusable. AI for good only for the few and business-as-usual. Your critical work is absolutely incredible and speaks truth against big tech power. In solidarity & support.
July 11, 2025 at 8:15 PM
Reposted by Mark Wong
A short blogpost detailing my experience of censorship at the AI for Good Summit, with links to both the original and censored versions of the slides and to my talk
aial.ie/blog/2025-ai...
AI for Good [Appearance?]
Reflections on the last minute censorship of my keynote at the AI for Good Summit 2025
aial.ie
July 11, 2025 at 2:01 PM
Reposted by Mark Wong
Our own @markwong.bsky.social - elected as Editorial Board Member for the Journal of Social Policy - is Senior Lecturer with expertise in data & AI policies, racial justice & participatory methodologies. Mark aims to tap his global networks to elevate JSP as a top venue for debate on policy & AI 🙌
July 11, 2025 at 9:28 AM
See more details and resources signposted in the blog post: *UK Government going full steam ahead with AI but left the people behind* www.gla.ac.uk/research/az/...
@uofgussp.bsky.social @uofglasgow.bsky.social @uofgsocsci.bsky.social @uofgnews.bsky.social @ukri.org
March 24, 2025 at 5:22 PM
What we need is to involve the public in AI governance.
This would allow diverse perspectives to participate in determining and auditing how AI should, or should not, be used in government. See more about what we are doing in the Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project.
March 24, 2025 at 5:22 PM
Co-design methods, e.g. people’s panels, ensure the lived expertise of adversely-racialised people is valued and listened to in the AI ecosystem. This echoes @demos-uk.bsky.social’s call for government to shift from ‘citizen engagement to citizen participation’ to mobilise mission-led government.
March 24, 2025 at 5:22 PM
Research I’ve led at University of Glasgow shows preventing inequalities in digital services and AI requires involving the public. Our co-created code of practice provides an example of how the government can develop digital services in more equitable ways. (see links in blog)
March 24, 2025 at 5:22 PM
Policies need to ensure AI is fair and beneficial for everyone before it is rolled out further in government departments. The UK government needs to involve the public in deciding how and why AI is used in the #publicsector. #ResponsibleAI means considering who is most impacted and rebalancing who has power.
March 24, 2025 at 5:22 PM
This work was done as part of the @UKRI_News funded project, 'Protecting Minoritised Ethnic Community Online' (thanks to the UKRI strategic priorities fund & REPHRAIN). We want to thank everyone who contributed to this work: the team, project partners, and participants. 8
November 20, 2024 at 12:28 PM
Our paper contributes to the growing debate on the importance of centering the role #marginalised communities play in data and AI and amplifying the voices of those most impacted. Read our article to find out more about why #codesign is important for trustworthy services. 7/n
November 20, 2024 at 12:28 PM
Our evidence reveals the nuanced realities of emotions, frustrations, and hopes that racialised people have towards making digital services fairer and more trustworthy. The article highlights co-design as a path desired by racialised peoples towards realising change and justice. 6/n
November 20, 2024 at 12:28 PM
Ample evidence in #criticalAI studies has revealed #AIharms to racialised people. AI models cause harm by transmitting discrimination, toxicity, misinformation, and negative stereotypes. What is less well known is how people make sense of and navigate these systems and harms. 5/n
November 20, 2024 at 12:28 PM
What we found were issues related to trust, data privacy, and poorer-quality access to services. Such experiences are shaped by fears and the lived experience of racism. We outline our case for a co-design approach to guide public and private sectors’ decision-making and #policy 4
November 20, 2024 at 12:28 PM
We argue it is imperative to understand, and value, racialised minorities’ #livedexperience to inform and improve the design of digital services. We drew on qualitative interviews and workshops with people who identify as minoritised ethnic individuals across England and Scotland. 3
November 20, 2024 at 12:28 PM
@AunamQuyoum
and I discussed the vulnerabilities minoritised ethnic people face in datafication processes & how they are racialised within data/algorithmic systems. The pace of change in policy and innovation remains slow, while #AI and #datadriven discrimination is rife. 2/n
November 20, 2024 at 12:28 PM