Bogdan Kulynych
@bogdankulynych.bsky.social
researcher studying privacy, security, reliability, and broader social implications of algorithmic systems · fake doctor working at a real hospital
website: https://kulyny.ch
Reposted by Bogdan Kulynych
What do you do after you’re done jumping the shark?

Whatever it is, Nature Careers is all in.
November 11, 2025 at 10:41 PM
Reposted by Bogdan Kulynych
Scientists and scholars in AI and its social impacts call on von der Leyen to retract #AIHype statement.

@olivia.science
@abeba.bsky.social
@irisvanrooij.bsky.social
@alexhanna.bsky.social
@rocher.lc
@danmcquillan.bsky.social
@robin.berjon.com
& many others have signed

www.iccl.ie/press-releas...
Scientists call on the President of the European Commission to retract AI hype statement
Experts in AI call on the President of the European Commission to retract unscientific AI hype statement she made in the budget speech.
www.iccl.ie
November 10, 2025 at 9:48 AM
Reposted by Bogdan Kulynych
Russia’s success in poisoning LLMs with lies, and the effects it has on both AI and politics, reflects a much deeper understanding of how societies operate than much of Silicon Valley has - and shows how important the social sciences are in understanding and waging information warfare
‘In a twist that befuddled researchers for a year, almost no human beings visit the sites, which are hard to browse or search. Instead, their content is aimed at crawlers, the software programs that scour the web & bring back content for search engines & LLMs’
www.washingtonpost.com/technology/2...
Russia seeds chatbots with lies. Any bad actor could game AI the same way.
In their race to push out new versions with more capability, AI companies leave users vulnerable to “LLM grooming” efforts that promote bogus information.
www.washingtonpost.com
November 3, 2025 at 7:59 AM
Reposted by Bogdan Kulynych
arXiv will no longer accept review articles and position papers unless they have been accepted at a journal or a conference and have completed successful peer review.

This is due to arXiv being overwhelmed by hundreds of AI-generated papers a month.

Yet another open submission process killed by LLMs.
Attention Authors: Updated Practice for Review Articles and Position Papers in arXiv CS Category – arXiv blog
blog.arxiv.org
November 1, 2025 at 5:28 PM
Reposted by Bogdan Kulynych
Pretends to be shocked
www.bbc.co.uk/mediacentre/...
October 23, 2025 at 8:18 AM
Reposted by Bogdan Kulynych
The viral "Definition of AGI" paper tells you to read fake references that do not exist!

Proof: different articles appear at the cited journal/volume/page numbers, and the cited titles can't be found in any searchable repository.

Take this as a warning not to use LMs to generate your references!
October 18, 2025 at 12:54 AM
Reposted by Bogdan Kulynych
imho — anyone who equates a human with an app or a machine today is just dehumanizing people and stripping them of their (dwindling, already eroding, not well respected) rights.
October 16, 2025 at 9:53 AM
Reposted by Bogdan Kulynych
Keynote at #COLM2025: Nicholas Carlini from Anthropic

"Are language models worth it?"

Explains that the prior decade of his work on adversarial images, while it taught us a lot, isn't very applied; it's unlikely anyone is actually altering images of cats in scary ways.
October 9, 2025 at 1:12 PM
Reposted by Bogdan Kulynych
I said a thing :).
October 3, 2025 at 9:27 PM
Reposted by Bogdan Kulynych
OpenAI's rapid rush into education has been achieved by habituating users through training programs, institutional lock-ins, strategic marketing partnerships, and third-party integrations that together are helping it become infrastructural to teaching and learning. It's going to be hard to get out.
September 26, 2025 at 10:19 PM
Reposted by Bogdan Kulynych
"We are told that AI is inevitable, that we must adapt or be left behind. But universities are not tech companies. Our role is to foster critical thinking, not to follow industry trends uncritically." www.ru.nl/en/research/...
September 12, 2025 at 10:45 AM
Reposted by Bogdan Kulynych
After 2 years in press, it's published!

"Talkin' 'Bout AI Generation: Copyright and the Generative-AI Supply Chain," is out in the 72nd volume of the Journal of the Copyright Society

copyrightsociety.org/journal-entr...

written with @katherinelee.bsky.social & @jtlg.bsky.social (2023)
TALKIN' 'BOUT AI GENERATION: COPYRIGHT AND THE GENERATIVE-AI SUPPLY CHAIN | The Copyright Society
We know copyright
copyrightsociety.org
September 10, 2025 at 7:12 PM
Reposted by Bogdan Kulynych
In a new paper, I try to resolve the counterintuitive evidence of Meehl’s “clinical vs statistical prediction” problems: Statistics only wins because the game is rigged.
The Actuary's Final Word on Algorithmic Decision Making
Paul Meehl's foundational work "Clinical versus Statistical Prediction," provided early theoretical justification and empirical evidence of the superiority of statistical methods over clinical judgmen...
arxiv.org
September 8, 2025 at 2:48 PM
Reposted by Bogdan Kulynych
If kids’ schools trained them for work based on what everyone thought the hot new technology was going to be, both my kids would have spent the past several years learning about the blockchain. This is why schools don’t attempt to do workplace training: life is pretty long.
August 9, 2025 at 11:23 AM
There are many similarities between the AI discourse now and the early web. There, too, were utopian visions of the future, like Barlow's famous declaration.

20 years on, the web is a great technology, yet here we are with all the dopamine-driven design and social polarization.
August 9, 2025 at 12:27 PM
From claims of "Ph.D.-level" intelligence for generative models, to calls for knowledge work to be replaced by these models, to Musk's claim that there is no research, only engineering: all are manifestations of anti-intellectualism.
Likening PhD holders to a (non-functional) algorithm is a form of dehumanisation & anti-intellectualism that is a bellwether for contemporary fascism. Essentially it's a typical — if not the archetypal — first step towards fascism: to dehumanise, deskill, defund, and, ultimately, fire the academics.
What I would like to remind everyone talking about Sam Altman talking about the “PhD level intelligence” of the new ChatGPT is that Sam Altman dropped out of college so he… has no experiential construct for what grad school even is.
August 9, 2025 at 12:05 PM
Reposted by Bogdan Kulynych
This is absolutely what I expect as well.
My guess is the AI bubble popping will be similar to the dot com bubble popping at the turn of the century. There’s a real technological advancement, it will have real long-term impact on the world, but a lot of the money now is hype, FOMO, and irrational exuberance.

But I guess we’ll see.
Couldn't be more obvious that this bubble popping is going to fundamentally destroy our economy.
August 3, 2025 at 2:28 PM
Reposted by Bogdan Kulynych
Excellent post by @aarontay.bsky.social on how there are so many LLMs in modern library search pipelines, each applying content moderation filters (or subject to cloud providers'), leading to unexpected & unwanted censorship of topics like Gaza and race rioting. aarontay.substack.com/p/the-ai-pow...
The AI powered Library Search That Refused to Search
From Clarivate's Summon to Primo Research Assistant, content‑moderation layers meant mostly for chatbots are "quietly" blocking controversial topics from being searched
aarontay.substack.com
July 30, 2025 at 10:53 AM
Reposted by Bogdan Kulynych
Rachel L. Draelos, Samina Afreen, Barbara Blasko, Tiffany Brazile, Natasha Chase, Dimple Desai, Jessica Evert, Heather L. Gardner, Lauren Herrmann, Aswathy Vaikom House, ...
Large language models provide unsafe answers to patient-posed medical questions
https://arxiv.org/abs/2507.18905
July 28, 2025 at 6:20 AM
Reposted by Bogdan Kulynych
firefox even does this for you
July 12, 2025 at 11:03 AM
Reposted by Bogdan Kulynych
The whole "Grok can be re-tuned at Elon's will to spout off like Hitler" thing sorta punctures the "LLM chatbots are AGI" discourse just a little bit, doesn't it.
July 11, 2025 at 6:11 PM
New preprint with the most precise mapping yet between differential privacy and common operational notions of privacy risk used in practice:
Unifying Re-Identification, Attribute Inference, and Data Reconstruction Risks in Differential Privacy

Bogdan Kulynych, Juan Felipe Gomez, Georgios Kaissis, Jamie Hayes, Borja Balle, Flavio du Pin Calmon, Jean Louis Raisaro

http://arxiv.org/abs/2507.06969
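
For a flavor of what such a mapping looks like (not the paper's bounds, which are in the arXiv link, and with function names of my own invention): the textbook hypothesis-testing view of (eps, delta)-DP already turns the privacy parameters into an operational bound on membership-inference risk. A minimal Python sketch:

import numpy as np

def mia_tpr_bound(fpr, eps, delta=0.0):
    """Upper bound on the true-positive rate of *any* membership-inference
    attack against an (eps, delta)-DP mechanism, as a function of the
    attack's false-positive rate (hypothesis-testing view of DP)."""
    fpr = np.asarray(fpr, dtype=float)
    return np.minimum.reduce([
        np.ones_like(fpr),                          # TPR is a probability
        np.exp(eps) * fpr + delta,                  # DP trade-off, one direction
        1.0 - np.exp(-eps) * (1.0 - fpr - delta),   # and the symmetric direction
    ])

def mia_advantage_bound(eps):
    """Bound on membership-inference advantage (max TPR - FPR) under pure eps-DP."""
    return (np.exp(eps) - 1.0) / (np.exp(eps) + 1.0)

print(mia_advantage_bound(1.0))                          # ~0.462
print(mia_tpr_bound([0.01, 0.05], eps=1.0, delta=1e-5))  # ~[0.027, 0.136]

The preprint's contribution, as the title says, is unifying this kind of bound across re-identification, attribute inference, and data reconstruction risks; the sketch above is only the classical starting point.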
July 10, 2025 at 10:05 AM
Reposted by Bogdan Kulynych
Some ✨ personal news ✨: I'm starting my independent consultancy, focused on helping organizations do good things with privacy-enhancing technology 🎉

It's called Hiding Nemo, and you can read all about it on our website ➡️ https://hiding-nemo.com 🪸
Hiding Nemo — Unlock the value of your data, without losing user trust.
An independent consultancy helping organizations do more with data in a safe and respectful way, with built-in compliance. Get in touch!
hiding-nemo.com
July 8, 2025 at 1:30 PM
Ah, the AI for Good Summit, complete with a Cybertruck display.
July 8, 2025 at 10:40 AM
Reposted by Bogdan Kulynych
Explainable AI has long frustrated me by lacking a clear theory of what an explanation should do. Improve use of a model for what? How? Given a task, what's the max effect an explanation could have? It's complicated bc most methods are functions of the features & prediction, but not of the true state being predicted 1/
July 2, 2025 at 4:53 PM