Pop Stefanija
@popstefanija.bsky.social
postPhD. social sciences and AI at @imec_smit, vub. hyperreal. algorithmically processed datafied human. hates even numbers. stays sane by knitting. likes to lean on people & things.
master students at my uni can submit an early thesis in may. today we received the submissions, and compared with two years ago, we have twice as many. usually most students submit in july. it makes me wonder if the allowed use of chatgpt&co has anything to do with it 👀
May 26, 2025 at 1:08 PM
a late afternoon in brussels
May 11, 2025 at 6:29 PM
The US embassy sent a questionnaire to Flemish universities regarding their diversity policies. Interfering in the sovereignty of the country much?
May 8, 2025 at 12:07 PM
the damage genAI does to epistemic processes is unprecedented. i am currently writing an opinion piece on genAI's epistemic authority. what enrages me even more is the universities' (not all!) mild policies on genAI use
“Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate…Both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else’s.”
Everyone Is Cheating Their Way Through College
ChatGPT has unraveled the entire academic project.
nymag.com
May 7, 2025 at 3:12 PM
Reposted by Pop Stefanija
"We identified Do by cross-referencing data from massive credential leaks, which are publicly available via breach databases....burner emails, IP addresses, repeated usernames, and a unique password reveal a more than decade-long digital trail that allowed researchers to link him to MrDeepFakes."
May 7, 2025 at 9:15 AM
Reposted by Pop Stefanija
Actually the increase in LLM errors is NOT surprising. There is absolutely no connection between mathematical sophistication and a grip on TRUTH. AI is just prediction of low-level parameters.
"The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why."
A.I. Hallucinations Are Getting Worse, Even as New Systems Become More Powerful
A new wave of “reasoning” systems from companies like OpenAI is producing incorrect information more often. Even the companies don’t know why.
www.nytimes.com
May 5, 2025 at 12:31 PM
the monday morning academic urge to never submit to a journal again
May 5, 2025 at 7:52 AM
fresh from the bookstore: @jathansadowski.com's and @bcmerchant.bsky.social's books on luddites and tech. i might be onto something…
May 3, 2025 at 9:11 PM
damn, here we go again. hello world!
May 3, 2025 at 9:00 PM