Konrad Hinsen
khinsen.scholar.social.ap.brid.gy
Researcher at CNRS. Computational science, in particular computational biophysics. Metascience, in particular the evolution of science in the digital era […]

[bridged from https://scholar.social/@khinsen on the fediverse by https://fed.brid.gy/ ]
Reposted by Konrad Hinsen
JOB: Research Scientist at Wikimedia

"We’re hiring a Research Scientist strongly committed to the principles of free knowledge, open source, privacy, and collaboration to join the Research team. As a Research Scientist, you will conduct applied research on the integrity of Wikipedia knowledge […]
Original post on mastodon.online
mastodon.online
December 27, 2025 at 10:14 AM
Sad commit messages of 2025: "Remove URLs for Wikipedia to prevent AI crawlers from exploiting them"

https://codeberg.org/khinsen/hyperdoc-demo/commit/026ab2ed89b3d9d83e58c53c6ca136b23d328f27
codeberg.org
December 26, 2025 at 11:32 AM
Reposted by Konrad Hinsen
"Dear all,

I am writing this to inform you that the Institute for Geometry and its Applications (IGA) is coming to an end. As the IGA does not meet the requirements to be a University Research Centre it can no longer continue, even in name. Consequently the IGA webpage will disappear along with […]
Original post on mathstodon.xyz
mathstodon.xyz
December 25, 2025 at 6:52 AM
Reposted by Konrad Hinsen
Did you know: you can avoid the need for backups by failing to do anything worth saving a copy of

Follow me for more computing lifehacks
December 16, 2025 at 3:49 AM
Reposted by Konrad Hinsen
RE: https://mastodon.thenewoil.org/@thenewoil/115773722160154268

Easy! What's the worst that could happen here... 🫣 Just casually converting a 40+ year legacy codebase to Rust (using "AI" of course): "Our North Star is ‘1 engineer, 1 month, 1 million lines of code.’"

Welcome to Software […]
December 24, 2025 at 1:24 PM
Reposted by Konrad Hinsen
this question is coming up a lot so i'll pin it
- replacement for github: https://codeberg.org
- replacement for github pages: https://grebedoc.dev
- easy ci runners for codeberg: https://github.com/whitequark/nixos-forgejo-actions-runner
Codeberg.org
Codeberg is a non-profit community-led organization that aims to help free and open source projects prosper by giving them a safe and friendly home.
codeberg.org
December 23, 2025 at 1:24 PM
Reposted by Konrad Hinsen
The @distrowatch end-of-year roundup does not pull its punches, in an admirable way:

«
Some distributions, particularly the commercial projects, shifted focus this year, discarding useful tools and replacing them with AI buzzwords, less capable installers, and broken core packages. We saw Red […]
Original post on social.vivaldi.net
social.vivaldi.net
December 22, 2025 at 12:14 PM
Reposted by Konrad Hinsen
Happy 50th birthday to Scheme!

MIT AI Memo No. 349, ‘Scheme: An Interpreter for Extended Lambda Calculus’ was published on December 22, 1975

https://dspace.mit.edu/handle/1721.1/5794
December 22, 2025 at 9:56 AM
Reposted by Konrad Hinsen
As someone who has so far only used pipenv for his #python dependency management (as most of my stuff is web dev that isn't packaged but rather deployed):

If I'd want to write a library that I wanna properly package/release, which dependency manager would you recommend when starting a project from […]
Original post on scholar.social
scholar.social
December 20, 2025 at 4:20 PM
Reposted by Konrad Hinsen
For whatever reason, Github banned my IP for one hour and I need to see what my students have been doing.

They must contribute to the opensource project of their choice and 99% of them end up on Github

And, yes, I have a Github account even if I would like to get rid of it. But I can’t grade […]
Original post on mamot.fr
mamot.fr
December 17, 2025 at 5:11 PM
Reposted by Konrad Hinsen
once again asking which libraries you feel are missing in the common lisp ecosystem

#commonlisp
December 18, 2025 at 9:13 AM
Reposted by Konrad Hinsen
«Imagine a carpenter who couldn’t figure out how to adjust their table saw, or a surgeon who shrugged and said something like, “I’m just not a scalpel person.” We would never accept that. But in the field of knowledge work, “I’m just not a tech person” has become a permanent identity instead of […]
Original post on scholar.social
scholar.social
December 18, 2025 at 3:08 PM
Reposted by Konrad Hinsen
«LLMs are copyright removal devices - copy open source (or proprietary!) data into it, and you get copyright free data on the other side that you are free to plagiarize into newly copyrighted works. While this process robs all copyright owners, it is particularly damaging to people involved in […]
Original post on scholar.social
scholar.social
December 18, 2025 at 9:33 PM
Reposted by Konrad Hinsen
«Digital commons is a bigger call than digital autonomy. It’s a better call because digital commons gets you digital autonomy, but not always the other way around. Almost everything we do today is Digitally Uncommon.»

Really great talk by @bert_hubert on where much of the tech is currently […]
Original post on scholar.social
scholar.social
December 15, 2025 at 5:32 PM
Reposted by Konrad Hinsen
Together with @ljrk, I'm running a #haecksen workshop at #39c3. This was put together quite spontaneously after the un-invitation of Joscha Bach, so I didn't manage to get a ticket yet.

I really want this workshop to happen! Would anyone have a spare ticket to sell to make that possible?
December 17, 2025 at 7:10 PM
Reposted by Konrad Hinsen
RE: https://hachyderm.io/@rOpenSci/115735884137280347

Nice to see @rOpenSci give some advice on how people can move/mirror to e.g. @Codeberg while continuing to be part of rOpenSci!
[blog] Code Hosting Options Beyond GitHub

rOpenSci activities generally rely on a GitHub-based workflow which can exclude users of other repositories. Here, @mpadge explores alternatives to GitHub for hosting code and describes ways in which to do so which also allow you to stay connected with […]
Original post on hachyderm.io
hachyderm.io
December 18, 2025 at 1:45 AM
Reposted by Konrad Hinsen
What does it mean for an online environment to be "toxic" or "polluted" and can we borrow ideas from environmental modeling to make online spaces more livable?

@gedankenstuecke has a post announcing a new CAT Lab project

https://citizensandtech.org/2025/12/online-communities-as-ecosystems/
**The metaphor of online communities that “have become toxic” or that are “being polluted” in different ways is a common one. But what do we mean when we talk about pollution and toxicity in online spaces, and what can we learn from the environmental sciences and natural ecosystems to improve things with and for communities? I invite you to think through this with us at CAT Lab.**

The r/AskHistorians subreddit is one of the hundreds of thousands of online communities on Reddit that bring together people around different topics or missions. For r/AskHistorians, that mission is _public history_: the exchange between learners and experts, where people can ask their questions about history with the expectation of getting answers from informed experts. Since its launch in 2011, this community has grown to around 2.5 million subscribers. As for most communities, this exchange is enabled and stewarded by a team of volunteer moderators who create and enforce moderation rules to serve the community’s mission. In the case of r/AskHistorians, those rules include writing original, in-depth answers rooted in good historical practice.

Content moderation, especially at such large scales, has always been work, but the amount of labor performed by the r/AskHistorians moderators drastically increased in 2022, when ChatGPT was released and “AI”-generated content started to be posted into the community _en masse_. CAT Lab Research Director Dr. Sarah Gilbert recently presented her research on these challenges at the 2025 conference of the _Society for Social Studies of Science_ (4S): while the community already prohibited plagiarism, such as copy & pasting from Wikipedia, the output of Large Language Models (LLMs) like ChatGPT introduced a much bigger pool of potential plagiarism. Dealing with it first requires figuring out which posts are “AI”-generated, which may take discussions amongst moderators to form a consensus and avoid falsely accusing posters of plagiarism.
This is especially fraught because people like English language learners (or speakers of English dialects that have been involved with training “AI”) are at higher risk of being accused of using “AI”. Once a decision is made, moderators need to take action (banning, in the case of r/AskHistorians) and ultimately deal with the potential negative backlash from users who feel treated unfairly, as well as review the appeals of users who were banned. Taken together, the advent of such “AI”-generated contributions, even when well-meaning, has had a polluting effect on the community by increasing volunteer workloads and straining the moderators.

## Managing Pollution in Large-Scale Cooperation

But r/AskHistorians, and Reddit content moderation generally, is far from unique in struggling with the “cultural pollution” that digital communities and people across the internet experience. On Wikipedia, volunteer editors are struggling to keep articles free from unsourced, “AI”-generated contributions, leading to the formation of the _WikiProject AI Cleanup_. Furthermore, machine-generated translations of Wikipedia articles into smaller languages are now polluting the editions of already vulnerable languages, as well as the languages themselves.

Free & Open Source Software (FOSS) is another digital commons affected. FOSS is mostly developed by software developers who volunteer their time, at a scale where it has been estimated that it would cost some $4.2 billion to re-build their efforts in a commercial setting. “AI”-generated submissions strain those volunteers in two main ways. Firstly, by increasing the rate at which people try to contribute machine-generated code to FOSS projects, which increases the workload of the human reviewers and raises concerns about long-term sustainability and maintenance.
Secondly, maintainers of FOSS projects report an increase in machine-generated bug reports that pollute their bug trackers, including reports of security vulnerabilities. Daniel Stenberg, maintainer of the _cURL_ program and library, which runs on virtually any digital system, including cars from nearly 50 manufacturers, speaks of a “death by a thousand slops”. He outlines how reviewing LLM-generated security vulnerability reports, which virtually always turn out to be false, takes up significant amounts of his time.

The impact of this type of “cultural pollution” is also felt by those who are not actively contributing to creating or maintaining online communities or commons: websites based solely on machine-generated content are proliferating, polluting both search engines and journalism and crowding out human-generated, high-quality journalism.

The analogy of pollution that many of these communities and maintainers refer to seems like an apt one. It even predates the launch of the generative AI systems that are currently the focus of this “digital pollution”: the related environmental concept of toxicity has been a staple of discussions of how people interact in online communities since at least the early 2000s. And more recently, people have argued that social media companies themselves should be viewed as potential polluters of society, and that our information is being polluted.

## Going From Metaphors to Modeling Pollution in Online Cooperation

As we will see, the goal of the “digital pollution” framing is **not** to label individual community participants or types of online culture as toxic or polluted _per se_. Instead, it can serve to understand how online ecosystems can suffer despite lots of well-intentioned and well-meaning interactions. Understanding these pollution dynamics is not just of academic interest; it might also help with modeling online interactions.
Such models can in turn help design interventions that have the potential to support moderators and improve online communities.

If we look at “pollution” more closely, in which ways do different factors in “commons pollution” mirror environmental pollution? Firstly, both environmental and digital pollution come in different shapes and forms. If we just think of water pollution, there is point source pollution, in which a single, identifiable source such as a factory discharges harmful materials into a body of water. Online, we can find similar “point sources” in targeted misinformation campaigns, run by humans or bots. But there are also more diffuse types of “nonpoint source pollution”, which in the environmental case could be agricultural run-off of fertilizers that ends up in streams. In those cases, an excess of “nutrients” creates eutrophication, allowing algal or bacterial blooms that deplete the oxygen in a body of water, which in turn leads to mass fish die-offs. In our online communities or commons, similar “nonpoint source pollution” could be a drastic increase in new contributions due to a technology like generative “AI”, or even an increased rate of new human contributors who aren’t familiar with community norms.

If treating “newcomers” as a potential pollutant seems strange, this is another point where the analogy holds for both environmental and cultural pollution: in toxicology, “the dose makes the poison” is a common refrain for the idea that there can be too much of a good thing, in human health and the environment as well as in online communities. While fertilizers and other products in agricultural run-off are productively used in the right dose, it is their accumulation in bodies of water that creates the eutrophication leading to algal blooms.
And in our online communities, an influx of new members is welcome if they can engage in “productive” contributions; but if the moderation and community engagement systems get overwhelmed, such increases can be harmful.

This points to another interesting similarity between environmental and digital pollution: both can be rooted in exogenous shocks. Examples in the environment are catastrophic events like oil spills, or heavy rains such as the ones during the Paris Olympics that overwhelmed the basins designed to prevent wastewater from flowing into the Seine, contaminating the river. In digital pollution, a similar exogenous shock could be world events and news, but also algorithmic recommendations, which can lead to a big and sudden influx of new community members, similar to what Ed has shown in his data visualizations on the impact of algorithmic recommendations.

## Working with Communities to Model and Intervene on Digital Pollution

Beyond these interesting parallels, is there a way to learn something, or otherwise benefit, from treating the idea of “pollution” as more than just a metaphor? Water pollution, like other forms of pollution, has been studied academically for at least 100 years. As a result, there exists a rich and broad set of mathematical tools to model and understand pollution and how it impacts the environment. A famous example from water pollution is the Streeter-Phelps equation, developed in 1925 as part of a pollution study of the Ohio River. The equation models the impact that organic matter entering a stream, for example through agricultural or urban run-off, has on the levels of dissolved oxygen, as a function of the distance or time from where/when the pollution occurred. This in turn lets one understand whether and how a stream can support life at different times and distances from the pollution.
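To make the Streeter-Phelps model concrete, here is a minimal sketch of its standard oxygen-deficit form; the equation itself is textbook, but the parameter names and the numerical values below are illustrative choices, not taken from the post:

```python
import math

def streeter_phelps(t, L0, D0, kd, ka):
    """Oxygen deficit D(t) downstream of an organic pollution load.

    t  -- travel time since the pollution entered the stream (days)
    L0 -- initial biochemical oxygen demand (mg/L)
    D0 -- initial oxygen deficit (mg/L)
    kd -- deoxygenation rate constant (1/day)
    ka -- reaeration rate constant (1/day)
    """
    return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
        + D0 * math.exp(-ka * t)

# Illustrative values: the deficit starts at D0, rises while decaying
# organic matter consumes oxygen, then falls as reaeration takes over
# (the classic "oxygen sag" curve).
deficits = [streeter_phelps(t, L0=10.0, D0=1.0, kd=0.3, ka=0.6)
            for t in range(8)]
```

The two competing exponentials are what give the sag its shape, and it is this kind of input-rate-versus-recovery-rate structure that the post asks whether one could adapt to moderation workloads in online commons.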
The question is: could methods such as these be adapted to understand the dynamics of online communities or digital commons? And what would similar models for better understanding digital pollution look like? We would like to explore these questions and are looking for people to join us. If you are equally intrigued, get in touch. You can reach me via email at bg493@cornell.edu, on Mastodon under @gedankenstuecke@scholar.social, and find many more contact methods on my website.

### Footnotes

Gilbert, S. A. (2025, September 6). Reluctant Saviors: Volunteer moderation and social media collapse [Panel Presentation]. Society for the Social Studies of Science (4S), Seattle.
citizensandtech.org
December 17, 2025 at 3:09 PM
Reposted by Konrad Hinsen
"That invisibility has created a misconception, in some quarters, that #rss is a relic. But the opposite is true: we’ve never relied on it more. And as the social web fractures, as platforms wall off content, and as AI agents begin remixing everything they can ingest, our dependence on neutral […]
Original post on mastodon.social
mastodon.social
December 17, 2025 at 9:19 AM
Reposted by Konrad Hinsen
"May the bridges we burn light the way" is definitely an album title for our times...

#omniumgatherum #melodicdeath
December 16, 2025 at 7:54 PM
Reposted by Konrad Hinsen
New (European) Let’s Encrypt alternative just dropped!! https://european-alternatives.eu/product/actalis-ssl

Anybody got experiences with Actalis?
Actalis SSL | European Alternatives
Actalis is an Italian certification authority
european-alternatives.eu
December 16, 2025 at 5:27 PM
Reposted by Konrad Hinsen
AI is going to make the word "adoption" toxic in tech.
www.404media.co/anthropic-ex...
Anthropic Exec Forces AI Chatbot on Gay Discord Community, Members Flee
“We’re bringing a new kind of sentience into existence,” Anthropic's Jason Clinton said after launching the bot.
www.404media.co
December 16, 2025 at 3:22 PM
Reposted by Konrad Hinsen
with

* Sarah Cohen-Boulakia (Univ. Paris-Saclay) on the question of reproducibility
* Abdelghani Maddi (CNRS) on the manipulation of metrics
* Ivan Oransky (Retraction Watch) on article retractions (talk in English)
* Michel Dubois (OFIS) for the closing session

All the […]
Original post on universites.social
universites.social
December 16, 2025 at 5:15 PM