Mark Graham
@geoplace.bsky.social

Prof at the Oxford Internet Institute

Director of @towardsfairwork.bsky.social

Publications: www.markgraham.space

Studies: Digital Economies, Digital Geographies, Economic Geography, Gig Economy, Data Work, AI Production Networks, Cities

Eat the rich ...

Reposted by Mark Graham

THE SILICON GAZE

After reading a really interesting paper from @oii.ox.ac.uk (link below), I asked ChatGPT (version 5.2) to give a ranking of countries by IQ, 'extrapolating' and 'estimating' where data was not available.

I then asked it to provide an 'approximate' heat map of the estimates

1/2

Reposted by Mark Graham

New coverage from The Times on research co-authored by Prof. @geoplace.bsky.social highlights how ChatGPT demonstrates biased outputs, reflecting long-standing inequalities embedded in AI training data.

Read more: www.thetimes.com/uk/technolog...
‘Biased’ AI says Cambridge is harder-working than boozy Oxford
The conclusion highlights the limitations of large language models, according to the researchers, who asked ChatGPT for one-word answers that gave 'binary' results
www.thetimes.com

Reposted by Mark Graham

New coverage from @euronews.com on Prof. @geoplace.bsky.social's research, which finds that answers from OpenAI’s ChatGPT favour wealthy, Western countries and sideline much of the Global South.

Read more:

www.euronews.com/next/2026/01...
OpenAI’s ChatGPT has a Western bias, study finds
ChatGPT’s viewpoints are shaped by the predominantly Western, white, male developers and platform owners who built it, a study finds.
www.euronews.com

Reposted by Mark Graham

'The silicon gaze: A typology of biases and inequality in LLMs through the lens of place'.

Develops "a five-part typology of bias (availability, pattern, averaging, trope, and proxy) that accounts for the complex ways in which LLMs privilege certain places while rendering others invisible."

Reposted by Mark Graham

Researchers Francisco Kerche, Matthew Zook and @geoplace.bsky.social show how bias emerges in ChatGPT outputs. For example, responses to queries rank Ipanema, Leblon and Lagoa as having the happiest people compared to Complexo do Alemão, Complexo da Maré and Rio Comprido as the unhappiest. 2/4

Reposted by Mark Graham

Researchers Francisco Kerche, Prof Matthew Zook and @geoplace.bsky.social find that ChatGPT reproduces global biases. For example, responses rank Brighton, London and Bristol as having the sexiest people in the UK whilst Grimsby, Accrington and Barnsley are rated lowest. More: bit.ly/4bF4K9B

Reposted by Mark Graham

New study from @oii.ox.ac.uk and the University of Kentucky sheds light on how bias manifests in ChatGPT outputs. For example, London boroughs Bloomsbury, Hampstead and the City of London are rated as having the smartest people with Croydon, Tottenham and Hillingdon rated the lowest. 1/2

Reposted by Mark Graham

“ChatGPT isn't an accurate representation of the world. It rather just reflects and repeats the enormous biases within its training data” @geoplace.bsky.social @oii.ox.ac.uk speaking to @dailymail.co.uk about his new co-authored study with University of Kentucky. www.dailymail.co.uk/sciencetech/...
AI 'reveals' the most racist towns in the UK - Burnley tops list
When asked which UK towns and cities are the most racist, ChatGPT claims that Burnley tops the list. This is followed by Bradford, Belfast, Middlesbrough, Barnsley, and Blackburn.
www.dailymail.co.uk

Reposted by Mark Graham

New @oii.ox.ac.uk and University of Kentucky study shows how ChatGPT amplifies global inequalities, with LLMs reflecting historic biases in training data. With thanks to @telegraph.co.uk for sharing the study. @geoplace.bsky.social
www.telegraph.co.uk/business/202...
AI thinks these are the most racist places in the UK
ChatGPT answers often repeat negative stereotypes and reinforce prejudices, study shows
www.telegraph.co.uk

Reposted by Mark Graham

The team has created a public website inequalities.ai where anyone can explore how ChatGPT rates countries, cities and neighbourhoods across a range of lifestyle indicators including food, culture and quality of life. 3/4

Reposted by Mark Graham

Researchers Francisco Kerche, Matt Zook and @geoplace.bsky.social find responses generated by ChatGPT consistently rate wealthier, Western regions as ‘better’, ‘smarter’, ‘happier’ and ‘more innovative’. 2/4

News alert! New study from @oii.ox.ac.uk and the University of Kentucky finds that ChatGPT amplifies global inequalities. Researchers find that large language models reflect historic biases in the data sets they learn from whilst shaping how people see the world. More here: bit.ly/4bF4K9B 1/4

Place is not a neutral category in AI systems. Our findings show how historical and institutional patterns of documentation become legible as common sense in LLM outputs.

You can explore all of our data and create your own maps at inequalities.ai

One recurring issue is the use of proxies: quantifiable stand-ins (rankings, lists, awards) used to answer questions that are not straightforwardly measurable. This tends to advantage already-visible places.

We used forced-choice prompts to elicit comparative judgements about places. This makes latent preferences and stereotypes easier to detect than in open-ended responses.

The paper develops a typology of five recurrent biases in LLM place representations: availability, pattern, averaging, trope, and proxy. The maps illustrate how these surface across regions.

A large share of place-based answers in LLMs appear to be shaped by uneven visibility in the underlying data. This is particularly evident for places that are sparsely documented online.

We introduce the term “silicon gaze” to describe patterned inequalities in how LLMs represent place. The paper sets out a typology and maps the resulting spatial distributions.

Reposted by Markus Heße

Our new paper audits ChatGPT’s place-based judgements using 20 million pairwise comparisons. We find systematic geographic biases in how places are described and evaluated.

journals.sagepub.com/doi/10.1177/... (authors: Francisco W. Kerche, Matthew Zook, Mark Graham)
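
The audit method summarised in this post (forced-choice pairwise comparisons aggregated into place rankings) can be sketched roughly as follows. The prompt wording, example places, and stubbed model answers are illustrative assumptions, not taken from the paper:

```python
from collections import Counter
from itertools import combinations

def forced_choice_prompt(place_a: str, place_b: str, attribute: str) -> str:
    """Build a forced-choice prompt that makes the model pick exactly one place."""
    return (f"Which of these two places has the {attribute} people: "
            f"{place_a} or {place_b}? Answer with exactly one place name.")

def rank_by_wins(places, pairwise_winners):
    """Aggregate pairwise winners into a simple win-count ranking."""
    wins = Counter({place: 0 for place in places})  # include zero-win places
    wins.update(pairwise_winners)
    return [place for place, _ in wins.most_common()]

# Illustrative places and stubbed answers (the study queried ChatGPT at scale).
places = ["Brighton", "Grimsby", "London"]
stub_answers = {
    ("Brighton", "Grimsby"): "Brighton",
    ("Brighton", "London"): "Brighton",
    ("Grimsby", "London"): "London",
}
winners = [stub_answers[pair] for pair in combinations(places, 2)]
print(rank_by_wins(places, winners))  # ['Brighton', 'London', 'Grimsby']
```

At the study's scale, the stubbed dictionary would be replaced by millions of real model responses; win-count aggregation is just one simple way to turn pairwise judgements into a ranking.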

Reposted by Mark Graham

The OII's Prof. @geoplace.bsky.social contributed to POST UK's report on AI and employment, which considers the factors driving adoption and the issues that might make this challenging.

Read the full report here: post.parliament.uk/research-bri...
Artificial intelligence (AI) and employment
Artificial intelligence (AI) is becoming more common in UK workplaces. How is it being used, and what are the impacts on job opportunities and working conditions?
post.parliament.uk

Reposted by Jérôme Denis

Workers powering the AI industry face terrible conditions, but they shouldn’t have to.

Interview with me in Yahoo News: www.yahoo.com/news/article...
Workers powering the AI industry face terrible conditions, but they shouldn’t have to – interview
Mark Graham, founder of the Fairwork initiative, notes that most of the human labour in the AI supply chain is data work in low-income countries done under poor conditions.
www.yahoo.com

Fairwork’s AI Supply Chain Assessment: Appen report is now LIVE.

- 15 changes were implemented by Appen during the assessment period.

- The report also highlights areas for further progress, including pay, worker protections, and transparency.

Read the full report here: fair.work/en/fw/public...

If AI is going to be fair, its supply chains have to be too. That’s why we’ve launched Fairwork Certification, working with lead firms to push higher standards all the way down their chains. Details here:

fair.work/wp-content/u...

A new @towardsfairwork.bsky.social assessment of Sama is out.

It looks at the people doing the invisible data work that keeps AI running for companies in sectors from driverless cars to online retail.

Read the scorecard and our report here:

🔗 fair.work/en/fw/public...

Reposted by Mark Graham

AI sounds like the future, but it runs on invisible human labour. For @freitag.de, I spoke with @geoplace.bsky.social about his book "Feeding the Machine" and about what AI really costs. Recommended reading for anyone who wants to look behind the scenes of the "AI companies"
'The working conditions are brutal': on the hidden labourers behind ChatGPT
Behind every AI image lies manual work: people in Kenya, India or the Philippines toil for hours on starvation wages to make machines smarter. In conversation, Mark Graham reveals the...
www.freitag.de

I’ve got a new chapter out with Adam Badger, Alessio Bertolini, Fabian Ferrari & Funda Ustek Spilda in the forthcoming Handbook of Labour Geography. In it, we unpack the Fairwork action-research method.

Read: www.elgaronline.com/edcollchap/b...
www.elgaronline.com

Reposted by Mark Graham

We see only the polished face of AI, but behind it lies a web of people, data & decisions shaping who gains & who's left behind. Join @geoplace.bsky.social @towardsfairwork.bsky.social @oii.ox.ac.uk to explore fairer AI
📅 29 Oct 📍Museum of Oxford 🔗 www.socsci.ox.ac.uk/unveiling-th... #ESRCFestival
Unveiling the human behind AI: A Rubik's cube journey to Fair Work
29 October, late morning - 4pm, Museum of Oxford
www.socsci.ox.ac.uk

Friends and colleagues in Oxford, join us for this event at the end of the month.

festivalofsocialscience.com/events/unvei...