Fentaw Abitew
@abitew.bsky.social
Working at the intersection of AI governance, socio-technical systems, and global equity. Founder at woglo. Focused on how technology reshapes power, autonomy, and the future of collective agency.

Researching AI, law, and society
Tangled AI governance

Roberts & Ziosi show why ISO, IEC, & … can’t govern alone → too slow, too narrow, too captured.
Their fix:
→ SDOs for trailing-edge risks (risk, compliance, process)
→ Agile actors for the frontier (evals, red-teaming). Exactly what my professor calls Tangled Governance.
Can we standardise the frontier of AI? - Oxford Martin AIGI
International standards have been promoted as a mechanism that can support the governance of advanced AI through explicating national-level regulations, supporting interoperability between different j...
aigi.ox.ac.uk
June 10, 2025 at 9:58 PM
To claim “no AI industry without Meta” flattens collective progress into branding. Meta played a key role—PyTorch, LLaMA, FAIR. No doubt.

But let’s not rewrite history. Modern AI stands on decades of work in math, optimization, and theory—long before FAIR!
June 8, 2025 at 7:04 PM
AI systems are deployed at scale without any statutory mechanism for independent oversight. Lacking legal safeguards or disclosure infrastructure, safety remains self-declared. This paper proposes a framework for third-party flaw disclosure to contest that asymmetry.

hai.stanford.edu/news/a-frame...
A Framework to Report AI’s Flaws | Stanford HAI
Pointing to "white-hat" hacking, AI policy experts recommend a new system of third-party reporting and tracking of AI’s flaws.
hai.stanford.edu
June 7, 2025 at 1:14 PM
Fragmented Global AI Governance
June 7, 2025 at 12:00 PM
10-year federal ban on state AI laws proposed.
Broad scope, no federal replacement.
Bipartisan resistance growing.
Not coordination—erasure.

Who decides how AI is governed?
June 6, 2025 at 11:10 AM
Can long-term strategy withstand short-term politics—or will misalignment put national priorities at risk?
June 5, 2025 at 9:15 PM
Need a framework for when to use AI—and when not?

This paper offers FIRE as a test. If a decision is:
→ Forward-looking
→ Individual and context-specific
→ Requires causal reasoning
→ Emerges through experimentation
...then it’s not a task to fully hand off to a model.
papers.ssrn.com/sol3/papers....
Artificial Intelligence and Actor-Specific Decisions
Artificial intelligence (AI) is increasingly seen as potentially replacing humans in decision making and problem solving across numerous domains. We argue that
papers.ssrn.com
June 5, 2025 at 3:47 PM