Timnit Gebru
timnitgebru.bsky.social

Personal Account

Founder: The Distributed AI Research Institute @dairinstitute.bsky.social.

Author: The View from Somewhere, a memoir & manifesto arguing for a technological future that serves our communities (to be published by One Signal / Atria …

Timnit Gebru is an Eritrean Ethiopian-born computer scientist who works in the fields of artificial intelligence (AI), algorithmic bias and data mining. She is a co-founder of Black in AI, an advocacy group that has pushed for more Black roles in AI development and research. She is the founder of the Distributed Artificial Intelligence Research Institute (DAIR).

Pinned
🔇🔇🔇Announcing new work from DAIR which is very close to my heart, 3 years in the making.

When #TigrayGenocide, the deadliest genocide of the 21st century thus far, started in November 2020, it was 1 month before I got fired from Google. 🧵

english.elpais.com/internationa...
Ethiopia’s forgotten war is the deadliest of the 21st century, with around 600,000 civilian deaths
Estimates by European institutions and academics say over half a million non-combatants have died during the Tigray conflict as a result of a government blockade that kept out humanitarian aid
english.elpais.com

Reposted by Timnit Gebru

After transformers, it wasn't even solely deep learning that became synonymous with AI, or even any task using transformers, but a specific setup with image generators, LLMs and the like, where the task is to take an input in one modality and generate an output in some modality.

And then when it goes out of favor that thing gets downgraded from "AI".

Every time a particular technique and/or task performs well on some benchmark that was considered difficult at the time, it gets elevated to "AI".

After AlexNet won the ImageNet challenge in 2012, deep learning became synonymous with "AI", and deep learning also became synonymous with all of machine learning even though the deep learning people were unpopular within machine learning before.

Yes, he was against the name change and harassed the people advocating for it 🙄

Oh yes, Playhdoh Flamingos as we call him, because we know he likes the attention when mentioned by name, has harassed me so much that it was the subject of an article.

www.theverge.com/22309962/tim...
Timnit Gebru was fired from Google — then the harassers arrived
For months, Pedro Domingos and Michael Lissack have engaged in a campaign to discredit Gebru’s research.
www.theverge.com

Reposted by Timnit Gebru

Thanks for writing and re-upping this. I think terminology is important, but I don't think regulating based on technical or statistical properties (e.g., "generative" or "transformer") is appropriate. I think the mode of interaction and epistemic function is more important for regulation.

Good thing we didn't.

I'd say that was a win for "open science." The absolute hypocrisy of these people.

Maybe don't use your power to order researchers to retract papers you don't find to be sufficiently full of hype if you claim to care about "open research"?

Apparently Fernando Pereira is donating to OpenReview (with a public announcement) because he cares about "open science" or "open research". News to me!

He's the Google VP who wrote an anonymous "privileged and confidential" letter, sent to HR instructing us to retract our Stochastic Parrots paper.

Reposted by Timnit Gebru

obligatory plug of this piece @kara-williams.bsky.social and I wrote a few months ago explaining the stakes of policymaking with vague/improper ideas of what AI means

consumerfed.org/specific-ter...

Even under ML, are kernel techniques AI? Or is it just neural networks, and the term "deep learning", which I learned was coined to avoid the stigma around "neural networks" that existed in ML before?

What about information theory? Those in signal processing may know the extent to which the much more hyped field of machine learning draws on concepts from information theory. But machine learning wasn't "AI" before.

Statistical ML techniques weren't bucketed under "AI" back when they weren't the things performing well on whatever tasks people thought were "AI" tasks in the 80s.

Are decision trees "AI" now? How about 40 years ago? What's the difference?

Is it the resulting model that's AI, the technique used to build the model, or the task that said model is solving? What about statistics? Is that "AI"?

Is logistic regression "AI"? What about convex optimization? Is an LLM AI? What about an image recognition system built through "feature engineering" rather than "feature learning" techniques?

This is also bucketed under the same term as a plant recognition tool trained exclusively to do that task and on that data.

This is then bucketed under the same term as the different types of techniques that are used to train this tool vs the resulting tool itself.

This is why @emilymbender.bsky.social @alexhanna.bsky.social say to name the specific thing rather than calling it "AI".

Right now, ChatGPT is bucketed under the same term as a model trained to translate one specific language into another, built carefully on curated data.

Wildly different things, tasks, techniques, and subspecialties are being lumped into "AI" and then conflated with each other, which doesn't help. Different types of models vs. the techniques to train them vs. the tasks they're supposed to accomplish, all under "AI".
