Zack Grant
zackgp94.bsky.social
Postdoctoral Researcher at the Nuffield Politics Research Centre (Uni of Oxford).

I research public opinion and party competition in Europe. Especially interested in political implications of AI + tech change; ethnic fragmentation; the environment.
See link above for the full working paper if interested… Would love to hear feedback if you have it: zack.grant (@)nuffield.ox.ac.uk. If in Oxford, follow @mhaslberger.bsky.social for details about an upcoming conference on the politics of AI, held at Nuffield College in November.
October 28, 2025 at 11:35 AM
This supports research on broader societal considerations: impacts on prices, healthcare, democracy, others' employment, etc. While exposed workers are a flashpoint, perhaps they're just at the forefront of public awareness of AI's potential to reshape society/econ in many ways beyond one's own job. 15/16
October 28, 2025 at 11:35 AM
But personal AI job fears are not the sole/main factor: a) the % supporting regulation > the % feeling negatively exposed; b) support for regulation > opposition even among personal AI-optimists; c) subjective exposure doesn't mediate the entire link between regulation demand & objective workplace exposure. 14/16
October 28, 2025 at 11:35 AM
Overall, the paper suggests high demand for gov. AI regulation that could rise further as workplace rollout continues, as even ‘AI-winners’ are not particularly libertarian. The politics may differ from past techno-shocks, as the labour market consequences are not confined to routine workers. 13/16
October 28, 2025 at 11:35 AM
We also look at a small sample of U.S. workers. Essentially no effect of AI exposure on demand for regulation in the full U.S. sample, BUT higher support among liberals expecting personal harm. No such interaction in Britain. Makes sense: support for ‘big gov’ is generally more widespread in Europe. 12/16
October 28, 2025 at 11:35 AM
The panel model looks at the effect of new/updated exposure experiences on regulation support, controlling for prior attitudes/exposure. It suggests a short-term (reactive?) adjustment in demand for AI reg. in response to perceiving greater threats at work. So there is potential for demand for gov. reg. to rise in future… 11/16
October 28, 2025 at 11:35 AM
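The panel set-up in the post above (regulation support regressed on updated exposure, controlling for the lagged outcome and prior exposure) can be sketched as a lagged-dependent-variable OLS. Everything below is a toy illustration on simulated data; the variable names and effect sizes are hypothetical, not the paper's actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated two-wave panel (all values hypothetical)
exposure_t1 = rng.normal(size=n)                       # wave-1 perceived exposure
support_t1 = 0.5 * exposure_t1 + rng.normal(size=n)    # wave-1 regulation support
exposure_t2 = exposure_t1 + rng.normal(scale=0.5, size=n)  # updated exposure
support_t2 = 0.6 * support_t1 + 0.3 * exposure_t2 + rng.normal(size=n)

# Lagged-DV model: support_t2 ~ support_t1 + exposure_t2 + exposure_t1,
# so the exposure_t2 coefficient captures the response to *new* threat perceptions
X = np.column_stack([np.ones(n), support_t1, exposure_t2, exposure_t1])
beta, *_ = np.linalg.lstsq(X, support_t2, rcond=None)
print(dict(zip(["const", "lag_support", "new_exposure", "old_exposure"],
               beta.round(2))))
```

With the simulated effect set to 0.3, the `new_exposure` coefficient should land near that value, mirroring the short-term adjustment the post describes.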
The link between AI regulation support and obj. workplace exposure + subj. AI-pessimism is robust to controls. Traditional automation (routine task intensity, RTI) exposure, while widely used in polsci, does not predict attitudes. Note obj. AI exposure is still predictive net of ‘subjective’ beliefs about personal AI costs/benefits… 10/16
October 28, 2025 at 11:35 AM
Among those *feeling* positively/negatively exposed, there is an interesting asymmetry. While net support is highest among those saying AI worsens their own job prospects (+59), there is no difference between those expecting no impact (+38) vs. those expecting benefits (+41). Even AI optimists do not oppose the idea of gov. regulation. 9/16
October 28, 2025 at 11:35 AM
But how important is personal AI job exposure? First, let’s look at support by quartiles of ‘objective’ AI exposure. Net support rises from around +31 to +51 points from the least (Q1) to the most (Q4) exposed. Not much techno-libertarianism among those at the AI forefront. Again, robust to D/P controls. 8/16
October 28, 2025 at 11:35 AM
How popular is gov. regulation of AI in general? Quite! If we look at *all* British workers, net support (support minus opposition) rose from +35 to +43 points between Oct 2024 and Spring 2025 — though many people remain ambivalent/undecided. 7/16.
October 28, 2025 at 11:35 AM
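For clarity, the ‘net support’ measure in the post above is just the support share minus the opposition share, with the ambivalent/undecided dropping out of the difference. A minimal sketch (the percentages are illustrative, not the survey’s actual marginals):

```python
def net_support(support_pct, oppose_pct):
    """Net support = % supporting minus % opposing; undecideds drop out."""
    return support_pct - oppose_pct

# Hypothetical marginals: 55% support, 20% oppose, 25% ambivalent/undecided
print(net_support(55, 20))  # -> 35 points
```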
Workers aren’t clueless about likely AI impacts either. Using Felten et al. (2021) to map the exposure of 364 jobs (from budget analysts to dancers), we find that ~50% of those in the most AI-exposed jobs (Q4) think it’ll affect them vs. 1/5 in the least-exposed (Q1). Robust to demographic/political (D/P) controls. 6/16
October 28, 2025 at 11:35 AM
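The quartile split in the post above can be sketched as follows: an occupation-level AI exposure score (the real analysis uses the Felten et al. 2021 index across 364 occupations) is attached to each worker’s job and cut into quartiles. The scores and occupations below are invented for illustration:

```python
import statistics

# Hypothetical occupation-level AI exposure scores (stand-ins for the
# Felten et al. 2021 AI Occupational Exposure index)
exposure = {
    "budget analyst": 0.92,
    "actuary": 0.88,
    "paralegal": 0.74,
    "graphic designer": 0.61,
    "electrician": 0.21,
    "dancer": 0.05,
}

def quartile(score, all_scores):
    """Assign Q1 (least exposed) .. Q4 (most exposed) by quartile cut-points."""
    q1, q2, q3 = statistics.quantiles(all_scores, n=4)  # three cut-points
    if score <= q1:
        return "Q1"
    if score <= q2:
        return "Q2"
    if score <= q3:
        return "Q3"
    return "Q4"

scores = list(exposure.values())
for job, s in exposure.items():
    print(job, quartile(s, scores))
```

Worker-level attitudes (e.g. expecting AI to affect one’s own job) would then be compared across the Q1–Q4 bins, as in the post.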
First: how do current workers *feel* that AI will affect them? Interestingly, while nearly half of workers think that it won’t make a difference, rising numbers feel personally threatened by AI (from 23 to 28%, 2024-2025). Personal AI-optimists are a much smaller group (1-in-10 in all waves). 5/16
October 28, 2025 at 11:35 AM
Ofc, AI could raise hiring/earnings through productivity gains. But this is currently highly uncertain, so it is crucial to see how the most affected perceive things + whether potential AI ‘winners’ + ‘losers’ are mobilising politically. A strong anti-AI coalition could make it harder to realise AI’s potential gains. 4/16
October 28, 2025 at 11:35 AM
An important Q! Traditionally, tech replaced routine, working-class, non-graduate jobs. Now, the roles most exposed to AI — e.g. budget analysts, actuaries, graphic designers, paralegals — are professional. If econ. insecurity spreads to the middle classes, could upend politics. 3/16
October 28, 2025 at 11:35 AM
We link data on what job tasks AI can actually do (from @edfelten.bsky.social and others) with new 2024–25 survey data on how British workers feel AI will affect them (good/bad/neither/DK). Then we see if both real and perceived exposure to AI shape support for government regulation of AI. 2/16
October 28, 2025 at 11:35 AM
Reposted by Zack Grant
To try to be restrained and persuasive:

1) Academic work (like most things) is of varying quality

2) There is an enormous leftward ideological skew to the lowest-quality work

3) 1 + 2 makes it hard for credentialed "expertise" to be credible even when most experts are good
September 26, 2025 at 8:44 PM
How much credence do you give the theory that pollsters' samples of the youth electorate are fairly underrepresentative (even w/ common weights)? I've heard response rates among males without degrees under 30ish tend to be pretty abysmal, and you would expect those to be more right-leaning?
September 7, 2025 at 2:06 AM
Very strong start to the conference!
June 26, 2025 at 10:55 AM
Oldshoremore?
February 27, 2025 at 11:18 PM