Dr. Eric J. W. Orlowski
@swejwo.bsky.social
Finally-Employed-Anthropologist.

Ph.D. in Sociocultural Anthropology | Research Fellow (AI Governance) @ NUS-AI Singapore | Swede (not the vegetable) |

Read what I write at https://readyaiminquire.substack.com.

The cover picture is no longer ironic.
13/ In other words: we need to (re)inject methodology before method.
December 9, 2025 at 8:52 AM
12/ This is why much AI-talk today puts the cart before the horse: instead of frantically building *whatever*, far more emphasis must be placed on what is being built, why it is being built, how it is being built, and to what end. Not 'let's build something and then see how'.
December 9, 2025 at 8:52 AM
11/ I’m increasingly convinced “AI governance” should be talked about as a process you run, not a 'thing' you ship.
December 9, 2025 at 8:52 AM
10/ Rule of thumb: if it can’t learn from incidents and update itself, it’s not governance. It’s a snapshot you’ll keep pointing at while the situation drifts.
December 9, 2025 at 8:52 AM
9/ So the artefacts—model cards, risk registers, audits—should be receipts. Useful receipts! But still receipts, not the meal.
December 9, 2025 at 8:52 AM
8/ And yes, AI isn’t a stable target. Even if the model never changes, the world around it does: new users, new edge cases, new incentives, new abuses, new politics.
December 9, 2025 at 8:52 AM
7/ The practical bits look like: when reviews get triggered, who has veto power, how decisions get recorded (including the uncomfortable trade-offs), what gets monitored, and what happens when something breaks.
December 9, 2025 at 8:52 AM
6/ Inside organisations: if “governance” can’t actually slow down or stop a deployment, it’s basically a vibes document with footnotes.
December 9, 2025 at 8:52 AM
5/ At the national level: strategies and laws are the headline. The real story is the plumbing—who can act, when they act, what data they see, what gets enforced, what gets revised after things go sideways.
December 9, 2025 at 8:52 AM
4/ Principles are fine, but they’re not governance. They’re just aspirations. Governance is what happens when those aspirations meet deadlines, incentives, uncertainty, and “oh no, that’s not what users are doing with it”.
December 9, 2025 at 8:52 AM
3/ But governance isn’t an object. It’s more like… upkeep. Ongoing work. The boring (important) stuff you keep doing because reality keeps changing.
December 9, 2025 at 8:52 AM
2/ One thing that keeps bugging me: we talk about “AI governance” like it’s a thing you can finish. A framework. A document. A checklist. Done ✅
December 9, 2025 at 8:52 AM
Thanks I hate it.
December 1, 2025 at 10:40 AM
11/
And more conversations like this!
December 1, 2025 at 9:53 AM
10/
There’s a long road ahead, but this is the work that matters.

More intentionality.

More grounding in lived realities.

More humility about the limits of the machine.

Fundamentally this is a human challenge, not purely a technical one. Not all challenges can be engineered away.
December 1, 2025 at 9:53 AM
9/
Cultural alignment isn’t a feature to toggle.

It’s a socio-technical commitment.
And it will only work if we treat it as such: collaboratively, reflexively, and with humility about what AI cannot know.
December 1, 2025 at 9:53 AM
8/
– foreground methodological rigour
– centre local cultural contexts
– involve social scientists + communities early
– admit the limits of current architectures
– and design for specific use cases, not mythical universals.
December 1, 2025 at 9:53 AM
7/
This is why intentionality isn’t optional.

If we want meaningful cultural alignment, we need to build processes that:
December 1, 2025 at 9:53 AM
6/
Most cultures, especially low-resource and oral ones, rely on:
gesture, tone, ritual, interaction, shared history, silence, embodiment…
None of that appears in typical training data.

These are all things that can't be scraped.
December 1, 2025 at 9:53 AM
5/
Another point from the panel (and one I’ve written about as well):
LLMs see an extremely narrow window into human culture.

They learn mainly from written text, which is a tiny slice of how cultures actually transmit meaning.
December 1, 2025 at 9:53 AM
4/
If we want culturally aligned AI, we have to design for it on purpose, not hope it emerges from scale, benchmarks, or clever prompting.
December 1, 2025 at 9:53 AM
3/
In my own research, I often describe culture as fractal.

Zoom in or zoom out, the complexity stays.

Multiple layers, overlapping identities, situational norms.

It’s lived, embodied, contextual.
December 1, 2025 at 9:53 AM
2/
My main point was: intentionality matters.

A lot of AI work still treats culture as something you can “vibe-code” into models by scraping more text. But culture doesn’t work like that; not in any society I’ve ever studied.
December 1, 2025 at 9:53 AM