Oskar van der Wal
@ovdw.bsky.social
Technology specialist at the EU AI Office / AI Safety / Prev: University of Amsterdam, EleutherAI, BigScience

Thoughts & opinions are my own and do not necessarily represent my employer.
Reposted by Oskar van der Wal
✈️ Headed to @iclr-conf.bsky.social — whether you’ll be there in person or tuning in remotely, I’d love to connect!

We’ll be presenting our paper on pre-training stability in language models and the PolyPythias 🧵

🔗 ArXiv: arxiv.org/abs/2503.09543
🤗 PolyPythias: huggingface.co/collections/...
April 22, 2025 at 11:02 AM
Reposted by Oskar van der Wal
Work in progress -- suggestions for NLP-ers based in the EU/Europe & already on Bluesky very welcome!

go.bsky.app/NZDc31B
November 10, 2024 at 5:24 PM
Last week, we organized the workshop "New Perspectives on Bias and Discrimination in Language Technology" 🤖 @uvahumanities.bsky.social @amsterdamnlp.bsky.social

We're looking back at two inspiring days of talks, posters, and discussions—thanks to everyone who participated!

wai-amsterdam.github.io
November 15, 2024 at 4:36 PM
This is a friendly reminder that there are 7 days left for submitting your extended abstract to this workshop!

(Since the workshop is non-archival, previously published work is welcome too. So consider submitting previous/future work to join the discussion in Amsterdam!)
Working on #bias & #discrimination in #NLP? Passionate about integrating insights from different disciplines? And do you want to discuss current limitations of #LLM bias mitigation work? 🤖
👋 Join the workshop "New Perspectives on Bias and Discrimination in Language Technology" on 4 & 5 Nov in #Amsterdam!
Workshop: New Perspectives on Bias and Discrimination in Language Technology.
wai-amsterdam.github.io
September 8, 2024 at 4:45 PM
Working on #bias & #discrimination in #NLP? Passionate about integrating insights from different disciplines? And do you want to discuss current limitations of #LLM bias mitigation work? 🤖
👋 Join the workshop "New Perspectives on Bias and Discrimination in Language Technology" on 4 & 5 Nov in #Amsterdam!
Workshop: New Perspectives on Bias and Discrimination in Language Technology.
wai-amsterdam.github.io
August 7, 2024 at 2:21 PM
Reposted by Oskar van der Wal
release day release day 🥳 OLMo 1B + 7B out today and 65B soon...

OLMo accelerates the study of LMs. We release *everything*, from the toolkit for creating the data (Dolma) to the training/inference code

blog blog.allenai.org/olmo-open-la...
olmo paper allenai.org/olmo/olmo-pa...
dolma paper allenai.org/olmo/dolma-p...
OLMo: Open Language Model
A State-Of-The-Art, Truly Open LLM and Framework
blog.allenai.org
February 1, 2024 at 7:33 PM
Reposted by Oskar van der Wal
@michahu.bsky.social did an interview laying out our recent paper, which contains the figures I insist on calling "the Mona Lisa of training visualizations"
Uncovering the Phases of Neural Network Training: Insights from CDS’ Michael Hu
The idea that neural networks undergo distinct developmental phases during training has long been a subject of debate and fascination. CDS…
nyudatascience.medium.com
January 31, 2024 at 10:30 PM
A 🧵 thread about strategies for improving social bias evaluations of LMs. #blueskAI 🤖

bsky.app/profile/ovdw...
I am super excited to share that our paper "Undesirable Biases in NLP: Addressing Challenges of Measurement" has been published in JAIR!
doi.org/10.1613/jair...
Undesirable Biases in NLP: Addressing Challenges of Measurement | Journal of Artificial In...
doi.org
January 24, 2024 at 9:55 AM
I am super excited to share that our paper "Undesirable Biases in NLP: Addressing Challenges of Measurement" has been published in JAIR!
doi.org/10.1613/jair...
Undesirable Biases in NLP: Addressing Challenges of Measurement | Journal of Artificial In...
doi.org
January 24, 2024 at 9:21 AM
Reposted by Oskar van der Wal
I decided what we need to make blueskAI happen is a feed. Reply here to get added to the whitelist! Whitelisted users can post to the feed by adding the following keywords to a post:

🤖
bskAI
blueskAI
January 5, 2024 at 3:56 PM
It was exciting to present our recent work at BlackboxNLP in Singapore on low-level causal interventions for gender bias in GPT-2 small: 📝 "Identifying and Adapting Transformer-Components Responsible for Gender Bias in an English Language Model"
aclanthology.org/2023.blackbo...
A summary 👇
Identifying and Adapting Transformer-Components Responsible for Gender Bias in an English Language M...
Abhijith Chintam, Rahel Beloch, Willem Zuidema, Michael Hanna, Oskar van der Wal. Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP. 2023.
aclanthology.org
December 11, 2023 at 4:24 PM
Reposted by Oskar van der Wal
Together with Antoine Bertin & Cristina Tarquini, we're raising funds to build a data-driven sculpture of beluga whale vocalizations as perceived by an Earth Species Project AI model. Can artificial listening enrich our understanding of whale communication through an immersive physical experience? 🐳
December 10, 2023 at 3:19 PM
Reposted by Oskar van der Wal
New manifesto! How can AI scientists borrow from evolutionary biology to strengthen their evidence and claims? Featuring the beautiful eggs of the tawny-flanked prinia.
The Parable of the Prinia’s Egg: An Allegory for AI Science | Naomi Saphra
I discuss what counts as strong evidence for an explanation of model behavior.
nsaphra.net
September 17, 2023 at 2:58 PM