blair
@blairaf.com
Asst. Prof, Amii Fellow, sci-fi creator @ UAlberta

Transfeminist AI governance, ethics, policy: blairaf.com

Gay trans cyborg warriors vs. AI monsters: objecttype3.app
Pinned
blair @blairaf.com · Apr 10
Why is AI governance so often ineffective at preventing AI from harming people and the planet? My new paper Transfeminist AI Governance addresses this question, now out in this month's issue of First Monday: firstmonday.org/ojs/index.ph...
Reposted by blair
My new article on Canada's recent AI strategy consultation. Lightning-fast engagement, dubious data protection measures, black box analysis, & AI-generated summaries from made-in-USA LLMs are not a winning strategy for building public trust

betakit.com/canadas-new-...
Canada’s new AI strategy is off to a bad start | BetaKit
Canadians already have low trust in AI. Exclusionary and unclear public engagement methods aren’t helping.
betakit.com
February 9, 2026 at 9:32 PM
Reposted by blair
even if i had absolutely no knowledge of elon musk or his history, i would have a lot of questions about someone who claimed they were going to build settlements on the moon within a decade, like, “what experience do you have in doing this on earth?”
February 9, 2026 at 9:57 PM
Reposted by blair
The main conceit of "AI" (namely chatbots and "agents") is the twin promise of control and productivity.

The mechanism of control is that you can be replaced more effectively by a machine, or someone who wields the machine more adroitly than you.
The bosses are forcing their employees to use it. Politicians are competing to shovel money & infrastructure at it. The billionaires who control media are constantly lecturing & hectoring people to accept it & use it more.

They don't care it's losing money. They don't care people hate it.
February 8, 2026 at 12:15 AM
Reposted by blair
I think "woke" was at least partly identified with "irritating" and the way we're rectifying that in Woke 2 is by being scary instead
February 6, 2026 at 2:52 AM
Reposted by blair
I used to love computer it was my friend. Now I have hate in my heart
February 5, 2026 at 9:17 PM
Reading week plans
February 7, 2026 at 6:48 PM
Reposted by blair
'Should we go all in on AI? Let's ask AI.'
February 7, 2026 at 3:30 AM
Reposted by blair
Me in 2021: I wish we had a more integrative national AI strategy and more public consultation on the AI strategy

Monkey paw in 2026:
Canada did a consultation on a new national AI strategy, formed an expert task force to write 32 reports, then used AI to analyze the responses & reports. The result is a summary that strings together 100s of vague action items & flattens nuance and policy trade-offs into false consensus
February 6, 2026 at 4:43 PM
Spending Friday night eating fried chicken and writing an angry essay about the government. Life is great
February 7, 2026 at 12:54 AM
Reposted by blair
ICYMI:

950+ Google workers signed a petition demanding that executives "disclose all contracts and collaboration with DHS, CBP, and ICE, and to divest from these partnerships."
www.nytimes.com/2026/02/06/b...
Google Workers Demand End to Cloud Services for Immigration Agencies
www.nytimes.com
February 6, 2026 at 11:37 PM
Reposted by blair
art has already been democratized by innovations like digital drawing tablets, libre/FOSS image editing programs, & the internet.

we have incredible creative tools available for either free or very inexpensively. democracy requires effort & work!
February 6, 2026 at 9:51 PM
Reposted by blair
This is without a doubt one of the best things I’ve read in a while.
blairaf.com blair @blairaf.com · Apr 10
Why is AI governance so often ineffective at preventing AI from harming people and the planet? My new paper Transfeminist AI Governance addresses this question, now out in this month's issue of First Monday: firstmonday.org/ojs/index.ph...
February 6, 2026 at 9:01 PM
Reposted by blair
Another thought: did any part of the consultation form obtain participant consent for feeding their responses into an opaque daisychain of models developed by foreign tech companies? Was PII filtered out of all 64k responses before pushing them through the pipeline?
Canada did a consultation on a new national AI strategy, formed an expert task force to write 32 reports, then used AI to analyze the responses & reports. The result is a summary that strings together 100s of vague action items & flattens nuance and policy trade-offs into false consensus
February 5, 2026 at 3:40 AM
Reposted by blair
Huge missed opportunity for Canada to show global leadership in AI governance. We could be a frontrunner on so many participatory methods - open legislative development and policy workshops, public awareness campaigns, citizens assemblies - instead we're making AI slop and calling it innovation
Canada did a consultation on a new national AI strategy, formed an expert task force to write 32 reports, then used AI to analyze the responses & reports. The result is a summary that strings together 100s of vague action items & flattens nuance and policy trade-offs into false consensus
February 5, 2026 at 1:36 AM
Reposted by blair
They asked for a consultation on AI and then replaced the results of the consultation with an AI output and from this it looks as if my input that they should not use AI was somehow not part of the AI output
Canada did a consultation on a new national AI strategy, formed an expert task force to write 32 reports, then used AI to analyze the responses & reports. The result is a summary that strings together 100s of vague action items & flattens nuance and policy trade-offs into false consensus
February 5, 2026 at 3:34 AM
Reposted by blair
The lack of real introspection about what kinds of data are actually inside a lot of these models, and the failure to develop proper auditing frameworks is really problematic. Without these, government is just funding hype, and that is not any kind of sustainable investment.
February 5, 2026 at 3:29 AM
Reposted by blair
I did the gov't questionnaire and every question was loaded with the assumption AI would be used in every ministry. There was no room to consider IF AI was appropriate.
February 5, 2026 at 3:28 AM
Reposted by blair
If your goal as a government is to build public trust, throwing public feedback into an opaque analytics pipeline seems counterproductive to that goal
Canada did a consultation on a new national AI strategy, formed an expert task force to write 32 reports, then used AI to analyze the responses & reports. The result is a summary that strings together 100s of vague action items & flattens nuance and policy trade-offs into false consensus
February 4, 2026 at 8:59 PM
Reposted by blair
Wild. It is EXACTLY like university administrators' approach.
Canada did a consultation on a new national AI strategy, formed an expert task force to write 32 reports, then used AI to analyze the responses & reports. The result is a summary that strings together 100s of vague action items & flattens nuance and policy trade-offs into false consensus
February 5, 2026 at 3:10 AM
Reposted by blair
canada continuing to drop the ball on taking tech policy seriously
Canada did a consultation on a new national AI strategy, formed an expert task force to write 32 reports, then used AI to analyze the responses & reports. The result is a summary that strings together 100s of vague action items & flattens nuance and policy trade-offs into false consensus
February 5, 2026 at 3:02 AM
Inverse transfem voice training to make my voice lower and dykier
February 5, 2026 at 2:45 AM