Planet Debian
Thomas Lange: 42.000 FAI.me jobs created
The FAI.me service has reached another milestone: the **42.000th** job was submitted via the web interface since the beginning of this service in 2017. The idea was to provide a simple web interface that lets end users create the configuration for a fully automatic installation, with only minimal questions and without knowing the syntax of the configuration files.

Thanks a lot for using this service and for all your feedback. **The next job can be yours!**

P.S.: I would like to get more feedback on the FAI.me service. What do you like most? What's missing? Do you have a success story about how you use the customized ISO for your deployment? Please fill out the FAI questionnaire or send feedback via email to fai.me@fai-project.org

### About FAI.me

FAI.me is the service for building your own customized images via a web interface. You can create an installation or live ISO or a cloud image. For Debian, multiple release versions can be chosen, as well as installations for Ubuntu Server, Ubuntu Desktop, or Linux Mint. Multiple options are available, like selecting different desktop environments, the language and keyboard, and adding a user with a password. Optional settings include adding your own package list, choosing a backports kernel, adding a postinst script, adding an ssh public key, choosing a partition layout, and some more.
blog.fai-project.org
February 18, 2026 at 8:16 PM
Antoine Beaupré: net-tools to iproute cheat sheet
This is also known as: "`ifconfig` is not installed by default anymore, how do I do this only with the `ip` command?"

I have been slowly training my brain to use the new commands but I sometimes forget some. So, here are a couple of equivalences between the old `net-tools` package and the new `iproute2`, about 10 years late:

`net-tools` | `iproute2` | shorter form | what it does
---|---|---|---
`arp -an` | `ip neighbor` | `ip n` | show the ARP (neighbor) table
`ifconfig` | `ip address` | `ip a` | show current IP address
`ifconfig` | `ip link` | `ip l` | show link stats (up/down/packet counts)
`route` | `ip route` | `ip r` | show or modify the routing table
`route add default GATEWAY` | `ip route add default via GATEWAY` | `ip r a default via GATEWAY` | add default route to `GATEWAY`
`route del ROUTE` | `ip route del ROUTE` | `ip r d ROUTE` | remove `ROUTE` (e.g. `default`)
`netstat -anpe` | `ss --all --numeric --processes --extended` | `ss -anpe` | list listening processes, less pretty

(One pair the table leaves out, plain address assignment, is sketched at the end of this post.)

# Another trick

Also note that I often alias `ip` to `ip -br -c` as it provides a much prettier output. Compare, before:

    anarcat@angela:~> ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host noprefixroute
           valid_lft forever preferred_lft forever
    2: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff permaddr xx:xx:xx:xx:xx:xx
        altname wlp166s0
        altname wlx8cf8c57333c7
    4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
           valid_lft forever preferred_lft forever
    20: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
        link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
        inet 192.168.0.108/24 brd 192.168.0.255 scope global dynamic noprefixroute eth0
           valid_lft 40699sec preferred_lft 40699sec

After:

    anarcat@angela:~> ip -br -c a
    lo               UNKNOWN        127.0.0.1/8 ::1/128
    wlan0            DOWN
    virbr0           DOWN           192.168.122.1/24
    eth0             UP             192.168.0.108/24

I don't even need to redact MAC addresses! It also affects the display of the other commands, which look similarly neat. Also imagine pretty colors above.

Finally, I don't have a cheat sheet for `iw` vs `iwconfig` (from `wireless-tools`) yet. I just use NetworkManager now and rarely have to mess with wireless interfaces directly.

# Background and history

For context, there are traditionally two ways of configuring the network in Linux:

* the old way, with commands like `ifconfig`, `arp`, `route` and `netstat`; those are part of the net-tools package
* the new way, mostly (but not entirely!) wrapped in a single `ip` command, that is the iproute2 package

It seems like the latter was made "important" in Debian in 2008, which means every release since Debian 5 "lenny" has featured the `ip` command. The former `net-tools` package was demoted in December 2016, which means every release since Debian 9 "stretch" ships _without_ an `ifconfig` command unless explicitly requested. Note that this was mentioned in the release notes in a similar (but, IMHO, less useful) table.

(Technically, the `net-tools` Debian package source still indicates it is `Priority: important`, but that's a bug I have just filed.)
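As promised above, the one common pair the cheat sheet leaves out is plain address assignment. A quick sketch, with example addresses:

    # net-tools: configure eth0 with a static address and bring it up
    ifconfig eth0 192.0.2.42 netmask 255.255.255.0 up
    # iproute2: the same thing
    ip address add 192.0.2.42/24 dev eth0
    ip link set eth0 up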
Finally, and perhaps more importantly, the name `iproute` is hilarious if you are a bilingual French speaker: it can be read as "I proute", which can be interpreted as "I fart", since "prout!" is the sound a fart makes. The fact that it's called `iproute2` only makes it more hilarious.
anarc.at
February 18, 2026 at 6:16 PM
Freexian Collaborators: Monthly report about Debian Long Term Support, January 2026 (by Santiago Ruano Rincón)
The Debian LTS Team, funded by Freexian’s Debian LTS offering, is pleased to report its activities for January.

### Activity summary

During the month of January, 20 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below). The team released 33 DLAs fixing 216 CVEs.

The team continued preparing security updates in its usual rhythm. Beyond the updates targeting Debian 11 (“bullseye”), which is the current release under LTS, the team also proposed updates for more recent releases (Debian 12 (“bookworm”) and Debian 13 (“trixie”)), including Debian unstable. We highlight several notable security updates below.

Notable security updates:

* python3.9, prepared by Andrej Shadura (DLA-4455-1), fixing multiple vulnerabilities in the Python interpreter.
* php, prepared by Guilhem Moulin (DLA-4447-1), fixing two vulnerabilities that could lead to request forgery or denial of service.
* apache2, prepared by Bastien Roucariès (DLA-4452-1), fixing four CVEs.
* linux-6.1, prepared by Ben Hutchings (DLA-4436-1), as a regular update of the linux 6.1 backport to Debian 11.
* python-django, prepared by Chris Lamb (DLA-4458-1), resolving multiple vulnerabilities.
* firefox-esr, prepared by Emilio Pozuelo Monfort (DLA-4439-1).
* gnupg2, prepared by Roberto Sánchez (DLA-4437-1), fixing multiple issues, including CVE-2025-68973, which could potentially be exploited to execute arbitrary code.
* apache-log4j2, prepared by Markus Koschany (DLA-4444-1).
* ceph, prepared by Utkarsh Gupta (DLA-4460-1).
* inetutils, prepared by Andreas Henriksson (DLA-4453-1), fixing an authentication bypass in telnetd.

Moreover, Sylvain Beucler studied the security support status of p7zip, a fork of 7zip that has become unmaintained upstream. To avoid leaving users with an unsupported package, Sylvain has investigated a path forward in collaboration with the security team and the 7zip maintainer, looking to replace p7zip with 7zip. Note, however, that the 7zip developers do not reveal which patches fix CVEs, making it difficult to backport individual patches to fix vulnerabilities in released Debian versions.

Contributions from outside the LTS Team: Thunderbird, prepared by maintainer Christoph Goehre. The DLA (DLA-4442-1) was published by Emilio.

The LTS Team has also contributed updates to the latest Debian releases:

* Bastien uploaded gpsd to unstable, and proposed updates for trixie (#1126121) and bookworm (#1126168) to fix two CVEs.
* Bastien also prepared the imagemagick updates for trixie and bookworm, released as DSA-6111-1, along with the bullseye update DLA-4448-1.
* Chris proposed a trixie point update for python-django (#112646), and the work for bookworm was completed in February (#1079454). The longstanding bookworm update required tracking down a regression in the django-storages packages.
* Markus prepared tomcat10 updates for trixie and bookworm (DSA-6120-1), and tomcat11 for trixie (DSA-6121-1).
* Thorsten Alteholz prepared bookworm point updates for zvbi (#1126167) to fix five CVEs; taglib (#1126273) to fix one CVE; and libuev (#1126370) to fix one CVE.
* Utkarsh prepared an unstable update of node-lodash to fix one CVE.

Other than the work related to updates, Sylvain made several improvements to the documentation and tooling used by the team.
### Individual Debian LTS contributor reports

* Abhijith PA
* Andreas Henriksson
* Andrej Shadura
* Bastien Roucariès
* Ben Hutchings
* Carlos Henrique Lima Melara
* Chris Lamb
* Daniel Leidert
* Emilio Pozuelo Monfort
* Guilhem Moulin
* Jochen Sprickerhof
* Lee Garrett
* Markus Koschany
* Paride Legovini
* Roberto C. Sánchez
* Santiago Ruano Rincón
* Sylvain Beucler
* Thorsten Alteholz
* Tobias Frost
* Utkarsh Gupta

### Thanks to our sponsors

Sponsors that joined recently are in bold.

* Platinum sponsors:
  * Toshiba Corporation (for 124 months)
  * Civil Infrastructure Platform (CIP) (for 92 months)
  * VyOS Inc (for 56 months)
* Gold sponsors:
  * F. Hoffmann-La Roche AG (for 134 months)
  * CONET Deutschland GmbH (for 118 months)
  * Plat’Home (for 117 months)
  * University of Oxford (for 74 months)
  * EDF SA (for 46 months)
  * Dataport AöR (for 21 months)
  * CERN (for 19 months)
* Silver sponsors:
  * Domeneshop AS (for 139 months)
  * Nantes Métropole (for 133 months)
  * Akamai - Linode (for 129 months)
  * Univention GmbH (for 125 months)
  * Université Jean Monnet de St Etienne (for 125 months)
  * Ribbon Communications, Inc. (for 119 months)
  * Exonet B.V. (for 109 months)
  * Leibniz Rechenzentrum (for 103 months)
  * Ministère de l’Europe et des Affaires Étrangères (for 87 months)
  * Dinahosting SL (for 74 months)
  * Upsun (formerly Platform.sh) (for 68 months)
  * Deveryware (for 62 months)
  * Moxa Inc. (for 62 months)
  * sipgate GmbH (for 60 months)
  * OVH US LLC (for 58 months)
  * Tilburg University (for 58 months)
  * GSI Helmholtzzentrum für Schwerionenforschung GmbH (for 49 months)
  * THINline s.r.o. (for 22 months)
  * Copenhagen Airports A/S (for 16 months)
  * **Conseil Départemental de l’Isère**
* Bronze sponsors:
  * Seznam.cz, a.s. (for 140 months)
  * Evolix (for 139 months)
  * Linuxhotel GmbH (for 137 months)
  * Intevation GmbH (for 136 months)
  * Daevel SARL (for 135 months)
  * Megaspace Internet Services GmbH (for 134 months)
  * Greenbone AG (for 133 months)
  * NUMLOG (for 133 months)
  * WinGo AG (for 132 months)
  * Entr’ouvert (for 124 months)
  * Adfinis AG (for 121 months)
  * Laboratoire LEGI - UMR 5519 / CNRS (for 116 months)
  * Tesorion (for 116 months)
  * Bearstech (for 107 months)
  * LiHAS (for 107 months)
  * Catalyst IT Ltd (for 102 months)
  * Demarcq SAS (for 96 months)
  * Université Grenoble Alpes (for 82 months)
  * TouchWeb SAS (for 74 months)
  * SPiN AG (for 71 months)
  * CoreFiling (for 67 months)
  * Observatoire des Sciences de l’Univers de Grenoble (for 58 months)
  * Tem Innovations GmbH (for 53 months)
  * WordFinder.pro (for 53 months)
  * CNRS DT INSU Résif (for 51 months)
  * Soliton Systems K.K. (for 47 months)
  * Alter Way (for 44 months)
  * Institut Camille Jordan (for 34 months)
  * SOBIS Software GmbH (for 19 months)
  * Tuxera Inc. (for 10 months)
  * **OPM-OP AS**
www.freexian.com
February 18, 2026 at 4:16 PM
Dirk Eddelbuettel: qlcal 0.1.0 on CRAN: Easier Calendar Switching
The eighteenth release of the qlcal package arrived at CRAN today. There have been no calendar updates in QuantLib 1.41 or 1.42, so it has been relatively quiet since the last release last summer, but we have now added a nice new feature (more below) leading to a new minor release version.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (_i.e._ business day lists) and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.

This release makes it (much) easier to work with multiple calendars. The previous setup remains: the package keeps one ‘global’ (and hidden) calendar object which can be set, queried, altered, etc. But now we added the ability to hold instantiated calendar objects in R. These are external pointer objects, and we can pass them to functions requiring a calendar. If no such optional argument is given, we fall back to the global default as before. Similarly, for functions operating on one or more dates, we now simply default to the current date if none is given. That means we can now say

    > sapply(c("UnitedStates/NYSE", "Canada/TSX", "Australia/ASX"), \(x) qlcal::isBusinessDay(xp=qlcal::getCalendar(x)))
    UnitedStates/NYSE        Canada/TSX     Australia/ASX
                 TRUE              TRUE              TRUE
    >

to query today (February 18) in several markets, or compare to two days ago when Canada and the US both observed a holiday:

    > sapply(c("UnitedStates/NYSE", "Canada/TSX", "Australia/ASX"), \(x) qlcal::isBusinessDay(as.Date("2026-02-16"), xp=qlcal::getCalendar(x)))
    UnitedStates/NYSE        Canada/TSX     Australia/ASX
                FALSE             FALSE              TRUE
    >

The full details from `NEWS.Rd` follow.

> #### Changes in version 0.1.0 (2026-02-18)
>
> * Invalid calendars return id ‘TARGET’ now
>
> * Calendar object can be created on the fly and passed to the date-calculating functions; if missing global one used
>
> * For several functions a missing date object now implies computation on the current date, e.g. `isBusinessDay()`

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
dirk.eddelbuettel.com
February 18, 2026 at 4:16 PM
Antoine Beaupré: Keeping track of decisions using the ADR model
In the Tor Project system administrators' team (colloquially known as TPA), we've recently changed how we make decisions, which means you'll get clearer communications from us about upcoming changes or _targeted_ questions about a proposal.

Note that this change only affects the TPA team. At Tor, each team has its own way of coordinating and making decisions, and so far this process is only used inside TPA. We encourage other teams inside and outside Tor to evaluate this process to see if it can improve your processes and documentation.

# The new process

We had traditionally been using a "RFC" ("Request For Comments") process and have recently switched to "ADR" ("Architecture Decision Record"). The ADR process is, for us, pretty simple. It consists of three things:

1. a simpler template
2. a simpler process
3. communication guidelines separate from the decision record

## The template

As team lead, the first thing I did was to propose a new template (in ADR-100), a variation of the Nygard template. The TPA variation of the template is similarly simple, as it has only 5 headings, and is worth quoting in full (a minimal skeleton of such a record is sketched below):

* **Context**: What is the issue that we're seeing that is motivating this decision or change?
* **Decision**: What is the change that we're proposing and/or doing?
* **Consequences**: What becomes easier or more difficult to do because of this change?
* **More Information** (optional): What else should we know? For larger projects, consider including a timeline and cost estimate, along with the impact on affected users (perhaps including existing Personas). Generally, this includes a short evaluation of alternatives considered.
* **Metadata**: status, decision date, decision makers, consulted, informed users, and link to a discussion forum

The previous RFC template had **17** (seventeen!) headings, which encouraged much longer documents. Now, the decision record will be easier to read and digest at a glance. An immediate effect of this is that I've started using GitLab issues more for comparisons and brainstorming. Instead of dumping into a document all sorts of details like pricing or in-depth alternatives comparisons, we record those in the discussion issue, keeping the document shorter.

## The process

The whole process is simple enough that it's worth quoting in full as well:

> Major decisions are introduced to stakeholders in a meeting, smaller ones by email. A delay allows people to submit final comments before adoption.

Now, of course, the devil is in the details (and ADR-101), but the point is to keep things simple. A crucial aspect of the proposal, which Jacob Kaplan-Moss calls the one weird trick, is to "decide who decides". Our previous process was vague about who makes the decision; the new template (and process) clarifies the decision makers, for each decision. Conversely, some decisions degenerate into endless discussions around trivial issues because _too many_ stakeholders are consulted, a problem known as the Law of triviality, also known as the "Bike Shed syndrome". The new process better identifies stakeholders:

* "informed" users (previously "affected users")
* "consulted" (previously undefined!)
* "decision maker" (instead of the vague "approval")

Picking those stakeholders is still tricky, but our definitions are more explicit and aligned with the classic RACI matrix (Responsible, Accountable, Consulted, Informed).
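To make the template concrete, here is a minimal skeleton of what a record following those five headings could look like. The content is hypothetical, not an actual TPA decision:

    # ADR-XXX: switch backup storage to the new cluster

    ## Context
    The current backup host is running out of disk space.

    ## Decision
    Move backups to the existing storage cluster.

    ## Consequences
    Easier capacity planning; backups now depend on cluster availability.

    ## More Information
    Alternatives considered: larger disks, an external provider.

    ## Metadata
    Status: proposed. Decision makers: TPA. Consulted: service admins.
    Informed: all staff. Discussion: <link to GitLab issue>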
## Communication guidelines

Finally, a crucial part of the process (ADR-102) is to decouple the act of making and recording decisions from _communicating_ about the decision. Those are two _radically_ different problems to solve. We have found that a single document can't serve both purposes. Because ADRs can affect a wide range of things, we don't have a specific template for communications. We suggest the Five Ws method (Who? What? When? Where? Why?) and, again, keeping things simple.

# How we got there

The ADR process is not something I invented. I first stumbled upon it in the Thunderbird Android project. Then, in parallel, I was in the process of reviewing the RFC process, following Jacob Kaplan-Moss's criticism of the RFC process. Essentially, he argues that:

1. the RFC process "doesn't include any sort of decision-making framework"
2. "RFC processes tend to lead to endless discussion"
3. the process "rewards people who can write to exhaustion"
4. "these processes are insensitive to expertise", "power dynamics and power structures"

And, indeed, I have been guilty of many of those issues. A verbose writer, I have written extremely long proposals that I suspect no one has ever fully read. Some proposals were adopted by exhaustion, or ignored because the right stakeholders were not looped in. Our discussion issue on the topic has more details on the issues I found with our RFC process.

But to give credit to the old process, it did serve us well while it lasted: it's better than nothing, and it allowed us to document a staggering number of changes and decisions (95 RFCs!) made over the course of 6 years of work.

# What's next?

We're still experimenting with the communication around decisions, as this text might suggest. Because it's a separate step, we also have a tendency to forget or postpone it, like this post, which comes a couple of months late. Previously, we'd just ship a copy of the RFC to everyone, which was easy and quick, but incomprehensible to most. Now we need to write a separate communication, which is more work but, hopefully, worth it, as the result is more digestible.

We can't wait to hear what you think of the new process and how it works for you, here or in the discussion issue! We're particularly interested in people who are already using a similar process, or who will adopt one after reading this.

> Note: this article was also published on the Tor Blog.
anarc.at
February 17, 2026 at 12:11 AM
Philipp Kern: What is happening with this "connection verification"?
You might see a verification screen pop up on more and more Debian web properties. Unfortunately, the AI world of today is meeting web hosts that use Perl CGIs and are not built as multi-tiered, scalable serving systems. The issues have been at three layers:

1. Apache's serving capacity runs full, with no threads left to serve requests. This means that your connection will sit around for a long time, not getting accepted. In theory this can be configured, but that would require requests to be handled in time.
2. Startup costs of request handlers are too high, because we spawn a process for every request. This currently affects the BTS and dgit's browse interface. packages.debian.org has been fixed, which increased scalability sufficiently.
3. Requests themselves are too expensive to be served quickly - think git blame without caching.

Optimally we would go and solve some scalability issues with the services; however, there is also a question of how much we _want_ to be able to serve, as AI scraper demand is just a steady stream of requests that are not shown to humans.

### How is it implemented?

DSA has now stood up some VMs with Varnish for proxying. Incoming TLS is provided by hitch, and TLS "on-loading" is done using haproxy. That way TLS goes in and TLS goes out. While Varnish does cache, if the content is cacheable (e.g. does not depend on cookies), that is not the primary reason for using it: it can be used for flexible query and response rewriting.

If no cookie with a proof of work is provided, the user is redirected to a challenge page that does some webcrypto in Javascript - because that looked similar to what other projects do (e.g. haphash that originally inspired the solution). However, so far it looks like scrapers generally do not run with Javascript enabled, so this whole crypto proof-of-work business could probably be replaced with just a Javascript-based redirect. The existing solution also has big (security) holes in it. And, as we found out, Firefox is slower at webcrypto than Chrome. I have recently reduced the complexity, so you should notice it blocking you significantly less.

Once you have the cookie, you can keep accessing the site for as long as the cookie is valid. Please do not make any assumptions about the cookies, or you will be broken in the future.

For legitimate scrapers that obey robots.txt, there is now an automatically generated IP allowlist in place (thanks, Marco d'Itri). It turns out that the search engines do not actually run Javascript either, and then loudly complain about the redirect to the challenge page. Other bots are generally exempt.

### Conclusion

I hope that right now we have found sort of the sweet spot, where the admins can stop spending human time on updating firewall rules and the services are generally available, reasonably fast, and still indexed. In case you see problems or run into a block with your own (legitimate) bots, please let me know.
debblog.philkern.de
February 16, 2026 at 8:11 PM
Antoine Beaupré: Kernel-only network configuration on Linux
What if I told you there is a way to configure the network on any Linux server that:

1. works across all distributions
2. doesn't require any software installed apart from the kernel and a boot loader (no `systemd-networkd`, `ifupdown`, `NetworkManager`, nothing)
3. is backwards compatible all the way back to Linux 2.0, in 1996

It has literally 8 different caveats on top of that, but is still totally worth your time.

# Known options in Debian

People following Debian development might have noticed there are now _four_ ways of configuring the network on a Debian system. At least that is what the Debian wiki claims, namely:

* `ifupdown` (`/etc/network/interfaces`): the traditional static configuration system, mostly for workstations and servers, that has been in Debian forever (since at least 2000), documented in the Debian wiki
* NetworkManager: self-proclaimed "standard Linux network configuration", mostly used on desktops but technically supports servers as well, see the Debian wiki page (introduced in 2004)
* `systemd-networkd`: used more for servers, see Debian Reference Chapter 5 (introduced some time around Debian 8 "jessie", in 2015)
* Netplan: the latest entry (2018), a YAML-based configuration abstraction layer on top of the above two, see also Debian Reference Chapter 5 and the Debian wiki

At this point, I feel `ifupdown` is on its way out, possibly replaced by `systemd-networkd`. NetworkManager already manages most desktop configurations.

# A "new" network configuration system

The method is this:

* `ip=` on the Linux kernel command line: for servers with a single IPv4 or IPv6 address, no software required other than the kernel and a boot loader (since 2002 or older)

> So by "new" I mean "new to me". This option is _really_ old. The `nfsroot.txt` file where it is documented predates the git import of the Linux kernel: it's part of the 2005 git import of 2.6.12-rc2. That's already 20+ years old.
>
> The oldest trace I found is in this 2002 commit, which imports the whole file at once, but the option might go back as far as 1996-1997, if the copyright on the file is correct and the option was present back then.

# What are you doing.

The trick is to add an `ip=` parameter to the kernel's command line. The syntax, as mentioned above, is in nfsroot.txt and looks like this:

    ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip>

Most settings are pretty self-explanatory, if you ignore the useless ones:

* `<client-ip>`: IP address of the server
* `<gw-ip>`: address of the gateway
* `<netmask>`: netmask, in quad notation
* `<device>`: interface name, if multiple are available
* `<autoconf>`: how to configure the interface, namely:
  * `off` or `none`: no autoconfiguration (static)
  * `on` or `any`: use any protocol (default)
  * `dhcp`: essentially like `on` for all intents and purposes

Note that the Red Hat manual has a different opinion:

    ip=[<server-id>]:<gateway-IP-number>:<netmask>:<client-hostname>:interface:[dhcp|dhcp6|auto6|on|any|none|off]

It's essentially the same (although `server-id` is weird), and the `autoconf` variable has other settings, so that's a bit odd.

# Examples

For example, this command-line setting:

    ip=192.0.2.42::192.0.2.1:255.255.255.0:::off

... will set the IP address to 192.0.2.42/24 and the gateway to 192.0.2.1. This will properly guess the network interface if there's a single one.
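To illustrate the remaining fields, a fuller static configuration that also pins the hostname, interface name, and a DNS server might look like this (addresses and names are hypothetical, and, as the caveats below note, the DNS field is untested):

    ip=192.0.2.42::192.0.2.1:255.255.255.0:myserver:eth0:off:9.9.9.9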
A DHCP-only configuration will look like this:

    ip=::::::dhcp

Of course, you don't want to type this by hand every time you boot the machine. That wouldn't work. You need to configure the kernel command line, and how to do that depends on your boot loader.

## GRUB

With GRUB, you need to edit (on Debian) the file `/etc/default/grub` (ugh), find a line like:

    GRUB_CMDLINE_LINUX=

and change it to:

    GRUB_CMDLINE_LINUX="ip=::::::dhcp"

then run `update-grub` to regenerate the actual GRUB configuration.

## systemd-boot and UKI setups

For `systemd-boot` UKI setups, it's simpler: just add the setting to the `/etc/kernel/cmdline` file. Don't forget to include anything that's non-default from `/proc/cmdline`. This assumes that file is what the `Cmdline=@` setting in `/etc/kernel/uki.conf` points at. See 2025-08-20-luks-ukify-conversion for my minimal documentation on this.

## Other systems

This is perhaps where this is much less portable than it might first look, because of course each distribution has its own way of configuring those options. Here are some that I know of:

* Arch (11 options, mostly `/etc/default/grub`, `/boot/loader/entries/arch.conf` for `systemd-boot`, or `/etc/kernel/cmdline` for UKI)
* Fedora (mostly `/etc/default/grub`, maybe more; RHEL mentions grubby, possibly some `systemd-boot` things here as well)
* Gentoo (5 options, mostly `/etc/default/grub`, `/efi/loader/entries/gentoo-sources-kernel.conf` for `systemd-boot`, or `/etc/kernel/install.d/95-uki-with-custom-opts.install`)

It's interesting that `/etc/default/grub` is consistent across all the distributions above, while the `systemd-boot` setups are _all over the place_ (except for the UKI case); I would have expected those to be _more_ standard than GRUB.

## dropbear-initramfs

If `dropbear-initramfs` is set up, it already _requires_ you to have such a configuration, and it might not work out of the box. This is because, by default, it _disables_ the interfaces configured in the kernel after completing its tasks (typically unlocking the encrypted disks). To fix this, you need to _disable_ that "feature":

    IFDOWN="none"

This will keep `dropbear-initramfs` from disabling the configured interface.

# Why?

Traditionally, I've always set up my servers with `ifupdown` and my laptops with NetworkManager, because that's essentially the default. But on some machines, I've started using `systemd-networkd` because `ifupdown` has... issues, particularly with reloading network configurations. `ifupdown` is an old hack, feels like legacy, and is Debian-specific.

Not excited about configuring yet another service, I figured I would try something else: just configure the network at boot, through the kernel command line. I was already doing such configurations for dropbear-initramfs (see this documentation), which requires the network to be up for unlocking the full-disk encryption keys. So in a sense, this is a "Don't Repeat Yourself" solution.

# Caveats

Also known as: "wait, that works?" Yes, it does! That said...

1. This is useful for servers where the network configuration will not change after boot. Of course, this won't work on laptops or any mobile device.
2. This only works for single-interface configurations. If you have multiple interfaces, bridges, VLANs, or wifi, none of this will work.
3. It does support IPv6 and feels like the best way to configure IPv6 hosts: true zero configuration.
4. It likely does _not_ work with a _dual-stack_ IPv4/IPv6 static configuration. It _might_ work with a _dynamic_ dual-stack configuration, but I doubt it.
5. I don't know what happens when a DHCP lease expires.
   No daemon seems to be running, so I assume leases are not renewed, which makes this more useful for static configurations, including server-side reserved fixed IP addresses. (A non-renewed lease risks getting reallocated to another machine, which would cause an addressing conflict.)
6. It will not automatically reconfigure the interface on link changes, but `ifupdown` does not either.
7. It will _not_ write a good `resolv.conf` for you; you need to configure that separately. _Maybe_ passing those `dns0-ip` settings will work? Untested, but DNS is, after all, mostly a user-level implementation (typically in `libc`); the kernel doesn't (again, typically) care about DNS.
8. I have not really tested this at scale: only a single test server at home.

Yes, that's a lot of caveats, but it happens to cover a _lot_ of machines for me, and it works surprisingly well. My main doubts are about long-term DHCP behaviour, but I don't see why that would be a problem with a statically defined lease.

# Cleanup

Once you have this configuration, you don't need _any_ "user"-level network system, so you can get rid of _everything_:

    apt purge systemd-networkd ifupdown network-manager netplan.io

Note that `ifupdown` (and probably others) leaves stray files in (e.g.) `/etc/network` which you might want to clean up, or keep in case all this fails and I have put you in utter misery. Configuration files for other packages might also be left behind; I haven't tested this, no warranty.

# Credits

This whole idea came from the A/I folks (not to be confused with AI), who have been doing this forever, thanks!
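P.S.: a quick way to sanity-check the result after rebooting (standard commands, not from the original setup notes):

    cat /proc/cmdline    # the ip= parameter should be there
    ip -br address       # the interface should carry the configured address
    ip route             # the default route should point at the gateway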
anarc.at
February 16, 2026 at 8:09 AM
Benjamin Mako Hill: Why do people participate in similar online communities?
_**Note:** I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects._

It seems natural to think of online communities as competing for the time and attention of their participants. Over the last few years, I’ve worked with a team of collaborators—led by Nathan TeBlunthuis—to use mathematical and statistical techniques from ecology to understand these dynamics. What we’ve found surprised us: competition between online communities is _rare and typically short-lived_.

When we started this research, we figured competition would be most likely among communities discussing similar topics. As a first step, we identified clusters of such communities on Reddit. One surprising thing we noticed in our Reddit data was that many of these communities that used similar language also had very high levels of overlap among their users. This was puzzling: _why were the same groups of people talking to each other about the same things in different places? And why don’t they appear to be in competition with each other for their users’ time and activity?_

We didn’t know how to answer this question using quantitative methods. As a result, we recruited and interviewed 20 active participants in clusters of highly related subreddits with overlapping user bases (for example, one cluster was focused on vintage audio). We found that the answer to the puzzle lay in the fact that the people we talked to were looking for three distinct things from the communities they participated in:

1. The ability to connect to specific information and narrowly scoped discussions.
2. The ability to socialize with people who are similar to themselves.
3. Attention from the largest possible audience.

Critically, we also found that these three things represented a “trilemma”: no single community can meet all three needs. You might find two of the three in a single community, but you could never have all three.

_Figure from “No Community Can Do Everything: Why People Participate in Similar Online Communities”: three key benefits that people seek from online communities, and how individual communities tend not to provide all three optimally. For example, large communities tend not to afford a tight-knit, homophilous community._

The end result is something I recognize in how I engage with online communities on platforms like Reddit. People tend to engage with a portfolio of communities that vary in size, specialization, topical focus, and rules. Compared with any single community, such overlapping systems can provide a wider range of benefits. No community can do everything.

* * *

_This work was published as a paper at CSCW:_ TeBlunthuis, Nathan, Charles Kiene, Isabella Brown, Laura (Alia) Levi, Nicole McGinnis, and Benjamin Mako Hill. 2022. “No Community Can Do Everything: Why People Participate in Similar Online Communities.” _Proceedings of the ACM on Human-Computer Interaction_ 6 (CSCW1): 61:1-61:25. https://doi.org/10.1145/3512908.

_This work was supported by the National Science Foundation (awards IIS-1908850, IIS-1910202, and GRFP-2016220885). A full list of acknowledgements is in the paper._
mako.cc
February 16, 2026 at 4:09 AM
Ian Jackson: Adopting tag2upload and modernising your Debian packaging
# Introduction

tag2upload allows authorised Debian contributors to upload to Debian simply by pushing a signed git tag to Debian’s gitlab instance, Salsa. We have recently announced that tag2upload is, in our opinion, now very stable, and ready for general use by all Debian uploaders.

tag2upload, as part of Debian’s git transition programme, is very flexible - it needs to support a large variety of maintainer practices. And it’s relatively unopinionated, wherever that’s possible. But, during the open beta, various contributors emailed us asking for Debian packaging git workflow advice and recommendations. This post is an attempt to give some more opinionated answers, and to guide you through modernising your workflow.

(This article is aimed squarely at Debian contributors. Much of it will make little sense to Debian outsiders.)

* Why
  * Ease of development
  * Don’t fear a learning burden; instead, start forgetting all that nonsense
  * Properly publishing the source code
* Adopting tag2upload - the minimal change
* Overhauling your workflow, using advanced git-first tooling
  * Assumptions
  * Topics and tooling
  * Choosing the git branch format
  * Determine upstream git and stop using upstream tarballs
  * Convert the git branch
  * Change the source format
  * Sort out the documentation and metadata
  * Configure Salsa Merge Requests
  * Set up Salsa CI, and use it to block merges of bad changes
* Day-to-day work
  * Making changes to the package
  * Test build
  * Uploading to Debian
  * Uploading a NEW package to Debian
  * New upstream version
  * Sponsorship
  * Incorporating an NMU
  * DFSG filtering (handling non-free files)
* Common issues
* Further reading

# Why

## Ease of development

git offers a far superior development experience to patches and tarballs. Moving tasks from a tarballs-and-patches representation to a normal, git-first representation makes everything simpler.

dgit and tag2upload automatically do many things that have to be done manually, or with separate commands, in dput-based upload workflows. They will also save you from a variety of common mistakes. For example, you cannot accidentally overwrite an NMU with tag2upload or dgit. These many safety catches mean that our software sometimes complains about things, or needs confirmation, when more primitive tooling just goes ahead. We think this is the right tradeoff: it’s part of the great care we take to avoid our software making messes. Software that has your back is very liberating for the user.

tag2upload makes it possible to upload with very small amounts of data transfer, which is great in slow or unreliable network environments. The other week I did a git-debpush over mobile data while on a train in Switzerland; it completed in seconds.

See the Day-to-day work section below to see how simple your life could be.

## Don’t fear a learning burden; instead, start forgetting all that nonsense

Most Debian contributors have spent months or years learning how to work with Debian’s tooling. You may reasonably fear that our software is yet more bizarre, janky, and mistake-prone stuff to learn. We promise (and our users tell us) that’s not how it is. We have spent a lot of effort on providing a good user experience. Our new git-first tooling, especially dgit and tag2upload, is much simpler to use than source-package-based tooling, despite being more capable. The idiosyncrasies and bugs of source packages, and of the legacy archive, have been relentlessly worked around and papered over by our thousands of lines of thoroughly-tested defensive code.
You too can forget all those confusing details, like our users have! After using our systems for a while you won’t look back.

And, you shouldn’t fear trying it out. dgit and tag2upload are unlikely to make a mess. If something is wrong (or even doubtful), they will typically detect it, and stop. This does mean that starting to use tag2upload or dgit can involve resolving anomalies that previous tooling ignored, or passing additional options to reassure the system about your intentions. So admittedly it _isn’t_ always trivial to get your first push to succeed.

## Properly publishing the source code

One of Debian’s foundational principles is that we publish the source code. Nowadays, the vast majority of us, and of our upstreams, are using git. We are doing this because git makes our life so much easier. But, without tag2upload or dgit, we aren’t _properly_ publishing our work!

Yes, we typically put our git branch on Salsa, and point `Vcs-Git` at it. However:

* The format of git branches on Salsa is not standardised. They might be patches-unapplied, patches-applied, bare `debian/`, or something even stranger.
* There is no guarantee that the DEP-14 `debian/1.2.3-7` tag on Salsa corresponds precisely to what was actually uploaded. dput-based tooling (such as `gbp buildpackage`) doesn’t cross-check the .dsc against git.
* There is no guarantee that the presence of a DEP-14 tag even means that that version of the package is in the archive.

This means that the git repositories on Salsa cannot be used by anyone who needs things that are _systematic_ and _always correct_. They are OK for expert humans, but they are awkward (even hazardous) for Debian novices, and you cannot use them in automation. The real test is: could you use `Vcs-Git` and Salsa to build a Debian derivative? You could not.

tag2upload and dgit _do_ solve this problem. When you upload, they:

1. Make a canonical-form (patches-applied) derivative of your git branch;
2. Ensure that there is a well-defined correspondence between the git tree and the source package;
3. Publish both the DEP-14 tag and a canonical-form `archive/debian/1.2.3-7` tag to a single central git repository, `*.dgit.debian.org`;
4. Record the git information in the `Dgit` field in the `.dsc` so that clients can tell (using the ftpmaster API) that this was a git-based upload, what the corresponding git objects are, and where to find them.

This dependably conveys your git history to users and downstreams, in a standard, systematic and discoverable way. tag2upload and dgit are the only system which achieves this. (The client is `dgit clone`, as advertised in e.g. dgit-user(7). For dput-based uploads, it falls back to importing the source package.)

# Adopting tag2upload - the minimal change

tag2upload is a substantial incremental improvement to many existing workflows. git-debpush is a drop-in replacement for building, signing, and uploading the source package. So, you can just adopt it _without_ completely overhauling your packaging practices (a short sketch follows below). You and your co-maintainers can even mix-and-match tag2upload, dgit, and traditional approaches, for the same package. Start with the wiki page and git-debpush(1) (ideally from forky aka testing). **You _don’t_ need to do any of the other things recommended in this article.**

# Overhauling your workflow, using advanced git-first tooling

The rest of this article is a guide to adopting the best and most advanced git-based tooling for Debian packaging.
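As promised, a flavour of the minimal change: on a typical gbp-style patches-unapplied branch, the first upload could look roughly like the sketch below. The `--quilt=gbp` flag is my reading of the quilt-mode options in git-debpush(1) for that layout, so verify it against the manpage before your first push; as with `--quilt=linear` later in this article, the flag matters the first time, to declare the branch layout.

    # on the finalised branch, with debian/changelog ready for release:
    git-debpush --quilt=gbp    # signs and pushes the tag; tag2upload does the rest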
## Assumptions

* Your current approach uses the “patches-unapplied” git branch format used with `gbp pq` and/or `quilt`, and often used with `git-buildpackage`. You previously used `gbp import-orig`.
* You are fluent with git, and know how to use Merge Requests on gitlab (Salsa). You have your `origin` remote set to Salsa.
* Your main Debian branch name on Salsa is `master`. Personally I think we should use `main`, but changing your main branch name is outside the scope of this article.
* You have enough familiarity with Debian packaging, including concepts like source and binary packages, and NEW review.
* Your co-maintainers are also adopting the new approach.

tag2upload and dgit (and git-debrebase) are flexible tools and can help with many other scenarios too, and you can often mix-and-match different approaches. But, explaining every possibility would make this post far too confusing.

## Topics and tooling

This article will guide you in adopting:

* tag2upload
* a patches-applied git branch for your packaging
* either plain git merge or git-debrebase
* dgit, when a with-binaries upload is needed (NEW)
* git-based sponsorship
* Salsa (gitlab), including Debian Salsa CI

## Choosing the git branch format

In Debian we need to be able to modify the upstream-provided source code. Those modifications are the **Debian delta**. We need to somehow represent it in git. We recommend storing the delta _as git commits to those upstream files_, by picking one of the following two approaches.

> ###### rationale
>
> Much traditional Debian tooling like `quilt` and `gbp pq` uses the “patches-unapplied” branch format, which stores the delta as patch files in `debian/patches/`, in a git tree full of unmodified upstream files. This is clumsy to work with, and can even be an alarming beartrap for Debian outsiders.

##### git merge

**Option 1: simply use git, directly, including git merge.**

Just make changes directly to upstream files on your Debian branch, when necessary. Use plain `git merge` when merging from upstream.

This is appropriate if your package has no or very few upstream changes. It is a good approach if the Debian maintainers and upstream maintainers work very closely, so that any needed changes for Debian are upstreamed quickly, and any desired behavioural differences can be arranged by configuration controlled from within `debian/`.

This is the approach documented more fully in our workflow tutorial dgit-maint-merge(7).

##### git-debrebase

**Option 2: adopt git-debrebase.**

git-debrebase helps maintain your delta as a linear series of commits (very like a “topic branch” in git terminology). The delta can be reorganised, edited, and rebased. git-debrebase is designed to help you carry a significant and complicated delta series. The older versions of the Debian delta are preserved in the history. git-debrebase makes extra merges to make a fast-forwarding history out of the successive versions of the delta queue branch.

This is the approach documented more fully in our workflow tutorial dgit-maint-debrebase(7). Examples of complex packages using this approach include src:xen and src:sbcl.

#####

## Determine upstream git and stop using upstream tarballs

We recommend using upstream git, only and directly. You should ignore upstream tarballs completely.

> ###### rationale
>
> Many maintainers have been importing upstream tarballs into git, for example by using `gbp import-orig`. But in reality the upstream tarball is an intermediate build product, not (just) source code.
> Using tarballs rather than git exposes us to additional supply chain attacks; indeed, the key activation part of the xz backdoor attack was hidden only in the tarball!
>
> git offers better traceability than so-called “pristine” upstream tarballs. (The word “pristine” is even a joke by the author of pristine-tar!)

First, establish which upstream git tag corresponds to the version currently in Debian. For the sake of readability, I’m going to pretend that the upstream version is `1.2.3`, and that upstream tagged it `v1.2.3`.

Edit `debian/watch` to contain something like this:

    version=4
    opts="mode=git" https://codeberg.org/team/package refs/tags/v(\d\S*)

You may need to adjust the regexp, depending on your upstream’s tag name convention. If `debian/watch` had a `files-excluded`, you’ll need to make a filtered version of upstream git.

##### git-debrebase

From now on we’ll generate our own .orig tarballs directly from git.

> ###### rationale
>
> We need _some_ “upstream tarball” for the `3.0 (quilt)` source format to work with. It needs to correspond to the git commit we’re using as our upstream. We _don’t_ need or want to use a tarball from upstream for this. The `.orig` is just needed so a nice legacy Debian source package (`.dsc`) can be generated.
>
> Probably, the current `.orig` in the Debian archive is an upstream tarball, which may be different to the output of git-archive, and may possibly even have different contents to what’s in git. The legacy archive has trouble with differing `.orig`s for the “same upstream version”. So we must — until the next upstream release — change our idea of the upstream version number.

We’re going to add `+git` to Debian’s idea of the upstream version. Manually make a tag with that name:

    git tag -m "Compatibility tag for orig transition" v1.2.3+git v1.2.3~0
    git push origin v1.2.3+git

If you are doing the packaging overhaul at the same time as a new upstream version, you can skip this part.

#####

## Convert the git branch

##### git merge

Prepare a new branch on top of upstream git, containing what we want:

    git branch -f old-master            # make a note of the old git representation
    git reset --hard v1.2.3             # go back to the real upstream git tag
    git checkout old-master :debian     # take debian/* from old-master
    git commit -m "Re-import Debian packaging on top of upstream git"
    git merge --allow-unrelated-histories -s ours -m "Make fast forward from tarball-based history" old-master
    git branch -d old-master            # it's incorporated in our history now

**If there are any patches, manually apply them** to your `main` branch with `git am`, and delete the patch files (`git rm -r debian/patches`, and commit). (If you’ve chosen this workflow, there should be hardly any patches.)

> ###### rationale
>
> These are some pretty nasty git runes, indeed. They’re needed because we want to restart our Debian packaging on top of a possibly quite different notion of what the upstream is.

##### git-debrebase

Convert the branch to git-debrebase format and rebase onto the upstream git:

    git-debrebase -fdiverged convert-from-gbp upstream/1.2.3
    git-debrebase -fdiverged -fupstream-not-ff new-upstream 1.2.3+git

If you had patches which patched generated files that are present only in the upstream tarball, and not in upstream git, you will encounter rebase conflicts. You can drop hunks editing those files, since those files are no longer going to be part of your view of the upstream source code at all.
> ###### rationale
>
> The force option `-fupstream-not-ff` will be needed this one time, because your existing Debian packaging history is (probably) not based directly on the upstream history. `-fdiverged` may be needed because git-debrebase might spot that your branch is not based on dgit-ish git history.

#####

Manually make your history fast forward from the git import of your previous upload:

    dgit fetch
    git show dgit/dgit/sid:debian/changelog   # check that you have the same version number
    git merge -s ours --allow-unrelated-histories -m 'Declare fast forward from pre-git-based history' dgit/dgit/sid

## Change the source format

Delete any existing `debian/source/options` and/or `debian/source/local-options`.

##### git merge

Change `debian/source/format` to `1.0`. Add `debian/source/options` containing `-sn`.

> ###### rationale
>
> We are using the “1.0 native” source format. This is the simplest possible source format - just a tarball. We would prefer “3.0 (native)”, which has some advantages, but dpkg-source between 2013 (wheezy) and 2025 (trixie) inclusive unjustifiably rejects this configuration.
>
> You may receive bug reports from over-zealous folks complaining about the use of the 1.0 source format. You should close such reports, with a reference to this article and to #1106402.

##### git-debrebase

Ensure that `debian/source/format` contains `3.0 (quilt)`.

#####

Now you are ready to do a local test build.

## Sort out the documentation and metadata

Edit `README.source` to at least mention dgit-maint-merge(7) or dgit-maint-debrebase(7), and to tell people not to try to edit or create anything in `debian/patches/`. Consider saying that uploads should be done via dgit or tag2upload.

Check that your `Vcs-Git` is correct in `debian/control`.

Consider deleting or pruning `debian/gbp.conf`, since it isn’t used by dgit, tag2upload, or git-debrebase.

##### git merge

Add a note to `debian/changelog` about the git packaging change.

##### git-debrebase

`git-debrebase new-upstream` will have added a “new upstream version” stanza to `debian/changelog`. Edit that so that it instead describes the packaging change. (Don’t remove the `+git` from the upstream version number there!)

#####

## Configure Salsa Merge Requests

##### git-debrebase

In “Settings” / “Merge requests”, change “Squash commits when merging” to “Do not allow”.

> ###### rationale
>
> Squashing could destroy your carefully-curated delta queue. It would also disrupt git-debrebase’s git branch structure.

#####

## Set up Salsa CI, and use it to block merges of bad changes

### Caveat - the tradeoff

gitlab is a giant pile of enterprise crap. It is full of startling bugs, many of which reveal a fundamentally broken design. It is only barely Free Software in practice for Debian (in the sense that we are very reluctant to try to modify it). The constant-churn development approach and open-core business model are serious problems. It’s very slow (and resource-intensive). It can be depressingly unreliable. That Salsa works as well as it does is a testament to the dedication of the Debian Salsa team (and those who support them, including DSA).

However, I have found that, despite these problems, Salsa CI is well worth the trouble. Yes, there are frustrating days when work is blocked because gitlab CI is broken and/or one has to keep mashing “Retry”. But, the upside is no longer having to remember to run tests, track which of my multiple dev branches tests have passed on, and so on.
Automatic tests on Merge Requests are a great way of reducing maintainer review burden for external contributions, and of helping uphold quality norms within a team. They’re a great boon for the lazy solo programmer. The bottom line is that I absolutely love it when the computer thoroughly checks my work. This is tremendously freeing, precisely at the point when one most needs it — deep in the code. If the price is to occasionally be blocked by a confused (or broken) computer, so be it.

### Setup procedure

Create `debian/salsa-ci.yml` containing:

    include:
      - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml

In your Salsa repository, under “Settings” / “CI/CD”, expand “General Pipelines” and set “CI/CD configuration file” to `debian/salsa-ci.yml`.

> ###### rationale
>
> Your project may have an upstream CI config in `.gitlab-ci.yml`. But you probably want to run the Debian Salsa CI jobs.
>
> You can add various extra configuration to `debian/salsa-ci.yml` to customise it. Consult the Salsa CI docs.

##### git-debrebase

Add to `debian/salsa-ci.yml`:

    .git-debrebase-prepare: &git-debrebase-prepare
      # install the tools we'll need
      - apt-get update
      - apt-get --yes install git-debrebase git-debpush
      # git-debrebase needs git user setup
      - git config user.email "salsa-ci@invalid.invalid"
      - git config user.name "salsa-ci"
      # run git-debrebase make-patches
      # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/371
      - git-debrebase --force
      - git-debrebase make-patches
      # make an orig tarball using the upstream tag, not a gbp upstream/ tag
      # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/541
      - git-deborig

    .build-definition: &build-definition
      extends: .build-definition-common
      before_script: *git-debrebase-prepare

    build source:
      extends: .build-source-only
      before_script: *git-debrebase-prepare
      variables:
        # disable shallow cloning of the git repository; needed for git-debrebase
        GIT_DEPTH: 0

> ###### rationale
>
> Unfortunately the Salsa CI pipeline currently lacks proper support for git-debrebase (salsa-ci#371) and has trouble directly using upstream git for orig tarballs (salsa-ci#541).
>
> These runes were based on those in the Xen package. You should subscribe to the tickets #371 and #541 so that you can replace the clone-and-hack when proper support is merged.

#####

Push this to Salsa and make the CI pass. If you configured the pipeline filename after your last push, you will need to explicitly start the first CI run. That’s in “Pipelines”: press “New pipeline” in the top right. The defaults will very probably be correct.

### Block untested pushes, preventing regressions

In your project on Salsa, go into “Settings” / “Repository”. In the section “Branch rules”, use “Add branch rule”. Select the branch `master`. Set “Allowed to merge” to “Maintainers”. Set “Allowed to push and merge” to “No one”. Leave “Allow force push” disabled.

This means that the only way to land _anything_ on your mainline is via a Merge Request. When you make a Merge Request, gitlab will offer “Set to auto-merge”. Use that. gitlab won’t normally merge an MR unless CI passes, although you can override this on a per-MR basis if you need to.

(Sometimes, immediately after creating a merge request in gitlab, you will see a plain “Merge” button. This is a bug. Don’t press that. Reload the page so that “Set to auto-merge” appears.)
### autopkgtests

Ideally, your package would have meaningful autopkgtests (DEP-8 tests). This makes Salsa CI more useful for you, and also helps detect and defend you against regressions in your dependencies. The Debian CI docs are a good starting point. In-depth discussion of writing autopkgtests is beyond the scope of this article.

# Day-to-day work

With this capable tooling, most tasks are much easier.

## Making changes to the package

Make all changes via a Salsa Merge Request. So start by making a branch that will become the MR branch.

On your MR branch you can freely edit every file. This includes upstream files, and files in `debian/`. For example, you can:

* Make changes with your editor and commit them.
* `git cherry-pick` an upstream commit.
* `git am` a patch from a mailing list or from the Debian Bug System.
* `git revert` an earlier commit, even an upstream one.

When you have a working state of things, tidy up your git branch:

##### git merge

Use `git rebase` to squash/edit/combine/reorder commits.

##### git-debrebase

Use `git-debrebase -i` to squash/edit/combine/reorder commits. When you are happy, run `git-debrebase conclude`.

**Do not edit debian/patches/**. With git-debrebase, this is purely an output. Edit the upstream files directly instead. To reorganise/maintain the patch queue, use `git-debrebase -i` to edit the actual commits.

#####

Push the MR branch (topic branch) to Salsa and make a Merge Request. Set the MR to “auto-merge when all checks pass”. (Or, depending on your team policy, you could ask for an MR review, of course.) If CI fails, fix up the MR branch, squash/tidy it again, force-push the MR branch, and once again set it to auto-merge.

## Test build

An informal test build can be done like this:

    apt-get build-dep .
    dpkg-buildpackage -uc -b

Ideally this will leave `git status` clean, with no modified or un-ignored untracked files. If it shows untracked files, add them to `.gitignore` or `debian/.gitignore` as applicable. If it dirties the tree, consider trying to make it stop doing that. The easiest way is probably to build out-of-tree, if supported upstream. If this is too difficult, you can leave the messy build arrangements as they are, but you’ll need to be disciplined about always committing, using git clean and git reset, and so on.

For formal binary builds, including for testing, use `dgit sbuild` as described below for uploading to NEW.

## Uploading to Debian

Start an MR branch for the administrative changes for the release. Document all the changes you’re going to release in `debian/changelog`.

##### git merge

gbp dch can help write the changelog for you:

    dgit fetch sid
    gbp dch --ignore-branch --since=dgit/dgit/sid --git-log=^upstream/main

> ###### rationale
>
> `--ignore-branch` is needed because gbp dch wrongly thinks you ought to be running this on `master`, but of course you’re running it on your MR branch.
>
> The `--git-log=^upstream/main` excludes all upstream commits from the listing used to generate the changelog. (I’m assuming you have an `upstream` remote and that you’re basing your work on their `main` branch.) If there was a new upstream version, you’ll usually want to write a single line about that, and perhaps summarise anything really important.

(For the first upload after switching to tag2upload or dgit you need `--since=debian/1.2.3-1`, where `1.2.3-1` is your previous DEP-14 tag, because `dgit/dgit/sid` will be a dsc import, not your actual history.)
#####

Change `UNRELEASED` to the target suite, and finalise the changelog. (Note that `dch` will insist that you at least save the file in your editor.)

    dch -r
    git commit -m 'Finalise for upload' debian/changelog

Make an MR of these administrative changes, and merge it. (Either set it to auto-merge and wait for CI, or if you’re in a hurry double-check that it really is just a changelog update so that you can be confident about telling Salsa to “Merge unverified changes”.)

Now you can perform the actual upload:

    git checkout master
    git pull --ff-only   # bring the gitlab-made MR merge commit into your local tree

##### git merge

    git-debpush

##### git-debrebase

    git-debpush --quilt=linear

`--quilt=linear` is needed only the first time, but it is very important that first time, to tell the system the correct git branch layout.

#####

## Uploading a NEW package to Debian

If your package is NEW (completely new source, or has new binary packages) you can’t do a source-only upload. You have to build the source and binary packages locally, and upload those build artifacts. Happily, given the same git branch you’d tag for tag2upload, and assuming you have sbuild installed and a suitable chroot, `dgit` can help take care of the build and upload for you:

Prepare the changelog update and merge it, as above. Then:

##### git-debrebase

Create the orig tarball and launder the git-debrebase branch:

    git-deborig
    git-debrebase quick

> ###### rationale
>
> Source package format 3.0 (quilt), which is what I’m recommending here for use with git-debrebase, needs an orig tarball; it would also be needed for 1.0-with-diff.

#####

Build the source and binary packages, locally:

    dgit sbuild
    dgit push-built

> ###### rationale
>
> You don’t _have to_ use `dgit sbuild`, but it is usually convenient to do so, because unlike sbuild, dgit understands git. Also it works around a gitignore-related defect in dpkg-source.

## New upstream version

Find the new upstream version number and corresponding tag. (Let’s suppose it’s `1.2.4`.) Check the provenance:

    git verify-tag v1.2.4

> ###### rationale
>
> Not all upstreams sign their git tags, sadly. Sometimes encouraging them to do so can help. You may need to use some other method(s) to check that you have the right git commit for the release.

##### git merge

Simply merge the new upstream version and update the changelog:

    git merge v1.2.4
    dch -v1.2.4-1 'New upstream release.'

##### git-debrebase

Rebase your delta queue onto the new upstream version:

    git debrebase new-upstream 1.2.4

#####

If there are conflicts between your Debian delta for 1.2.3, and the upstream changes in 1.2.4, this is when you need to resolve them, as part of `git merge` or `git (deb)rebase`.

After you’ve completed the merge, test your package and make any further needed changes. When you have it working in a local branch, make a Merge Request, as above.

## Sponsorship

git-based sponsorship is super easy! The sponsee can maintain their git branch on Salsa, and do all normal maintenance via gitlab operations. When the time comes to upload, the sponsee notifies the sponsor that it’s time. The sponsor fetches and checks out the git branch from Salsa, does their checks, as they judge appropriate, and when satisfied runs `git-debpush`.
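For instance, the sponsor’s side might look like this (a sketch only; the repository URL and branch name are illustrative, and the `git-debpush` invocation may need options as discussed above):

    git clone https://salsa.debian.org/debian/hello.git
    cd hello
    git checkout sponsee-release-branch
    # review, test build, etc., and when satisfied:
    git-debpush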
As part of the sponsor’s checks, they might want to see all changes since the last upload to Debian:

    dgit fetch sid
    git diff dgit/dgit/sid..HEAD

Or to see the Debian delta of the proposed upload:

    git verify-tag v1.2.3
    git diff v1.2.3..HEAD ':!debian'

##### git-debrebase

Or to show all the delta as a series of commits:

    git log -p v1.2.3..HEAD ':!debian'

Don’t look at `debian/patches/`. It can be absent or out of date.

#####

## Incorporating an NMU

Fetch the NMU into your local git, and see what it contains:

    dgit fetch sid
    git diff master...dgit/dgit/sid

If the NMUer used dgit, then `git log dgit/dgit/sid` will show you the commits they made.

Normally the best thing to do is to simply merge the NMU, and then do any reverts or rework in followup commits:

    git merge dgit/dgit/sid

##### git-debrebase

You should `git-debrebase quick` at this stage, to check that the merge went OK and the package still has a lineariseable delta queue.

#####

Then make any followup changes that seem appropriate. Supposing your previous maintainer upload was `1.2.3-7`, you can go back and see the NMU diff again with:

    git diff debian/1.2.3-7...dgit/dgit/sid

##### git-debrebase

The actual changes made to upstream files will always show up as diff hunks to those files. diff commands will often also show you changes to `debian/patches/`. Normally it’s best to filter them out with `git diff ... ':!debian/patches'`

If you’d prefer to read the changes to the delta queue as an interdiff (diff of diffs), you can do something like

    git checkout debian/1.2.3-7
    git-debrebase --force make-patches
    git diff HEAD...dgit/dgit/sid -- :debian/patches

to diff against a version with `debian/patches/` up to date. (The NMU, in `dgit/dgit/sid`, will necessarily have the patches already up to date.)

#####

## DFSG filtering (handling non-free files)

Some upstreams ship non-free files of one kind or another. Often these are just in the tarballs, in which case basing your work on upstream git avoids the problem. But if the files are in upstream’s git trees, you need to filter them out.

**This advice is not for (legally or otherwise) dangerous files**. If your package contains files that may be illegal, or hazardous, you need much more serious measures. In this case, even pushing the upstream git history to any Debian service, including Salsa, must be avoided. If you suspect this situation you should seek advice, privately and as soon as possible, from dgit-owner@d.o and/or the DFSG team. Thankfully, legally dangerous files are very rare in upstream git repositories, for obvious reasons.

Our approach is to make a filtered git branch, based on the upstream history, with the troublesome files removed. We then treat that as the upstream for all of the rest of our work.

> ###### rationale
>
> Yes, this will end up including the non-free files in the git history, on official Debian servers. That’s OK. What’s forbidden is non-free material in the Debianised git tree, or in the source packages.

### Initial filtering

    git checkout -b upstream-dfsg v1.2.3
    git rm nonfree.exe
    git commit -m "upstream version 1.2.3 DFSG-cleaned"
    git tag -s -m "upstream version 1.2.3 DFSG-cleaned" v1.2.3+ds1
    git push origin upstream-dfsg

And now, use `1.2.3+ds1`, and the filtered branch `upstream-dfsg`, as the upstream version, instead of `1.2.3` and `upstream/main`. Follow the steps for Convert the git branch or New upstream version, as applicable, adding `+ds1` into `debian/changelog`.
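For example, recording the filtered version in the changelog (the version is illustrative):

    dch -v1.2.3+ds1-1 'New upstream release, with non-free files removed.'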
If you missed something and need to filter out more non-free files, re-use the same `upstream-dfsg` branch and bump the `ds` version, e.g. `v1.2.3+ds2`.

### Subsequent upstream releases

    git checkout upstream-dfsg
    git merge v1.2.4
    git rm additional-nonfree.exe   # if any
    git commit -m "upstream version 1.2.4 DFSG-cleaned"
    git tag -s -m "upstream version 1.2.4 DFSG-cleaned" v1.2.4+ds1
    git push origin upstream-dfsg

### Removing files by pattern

If the files you need to remove keep changing, you could automate things with a small shell script `debian/rm-nonfree` containing appropriate `git rm` commands. If you use `git rm -f` it will succeed even if the `git merge` from real upstream has conflicts due to changes to non-free files.

> ###### rationale
>
> Ideally `uscan`, which has a way of representing DFSG filtering patterns in `debian/watch`, would be able to do this, but sadly the relevant functionality is entangled with uscan’s tarball generation.

# Common issues

* **Tarball contents** : If you are switching from upstream tarballs to upstream git, you may find that the git tree is significantly different. It may be missing files that your current build system relies on. If so, you definitely want to be using git, not the tarball. Those extra files in the tarball are intermediate built products, but in Debian we should be building from the real source! Fixing this may involve some work, though.
* **gitattributes** : For Reasons the dgit and tag2upload system disregards and disables the use of `.gitattributes` to modify files as they are checked out. Normally this doesn’t cause a problem so long as any orig tarballs are generated the same way (as they will be by tag2upload or `git-deborig`). But if the package or build system relies on them, you may need to institute some workarounds, or, replicate the effect of the gitattributes as commits in git.
* **git submodules** : git submodules are terrible and should never ever be used. But not everyone has got the message, so your upstream may be using them. If you’re lucky, the code in the submodule isn’t used, in which case you can `git rm` the submodule.

# Further reading

I’ve tried to cover the most common situations. But software is complicated and there are many exceptions that this article can’t cover without becoming much harder to read. You may want to look at:

* **dgit workflow manpages** : As part of the git transition project, we have written workflow manpages, which are more comprehensive than this article. They’re centered around use of dgit, but also discuss tag2upload where applicable. These cover a much wider range of possibilities, including (for example) choosing different source package formats, how to handle upstreams that publish only tarballs, etc. They are correspondingly much less opinionated. Look in dgit-maint-merge(7) and dgit-maint-debrebase(7). There is also dgit-maint-gbp(7) for those who want to keep using `gbp pq` and/or `quilt` with a patches-unapplied branch.
* **NMUs** are very easy with dgit. (tag2upload is usually less suitable than dgit, for an NMU.) You can work with any package, in git, in a completely uniform way, regardless of maintainer git workflow. See dgit-nmu-simple(7).
* **Native packages** (meaning packages maintained wholly within Debian) are much simpler. See dgit-maint-native(7).
* **tag2upload documentation** : The tag2upload wiki page is a good starting point. There’s the git-debpush(1) manpage of course.
* **dgit reference documentation** : There is a comprehensive command-line manual in dgit(1).
Description of the dgit data model and Principles of Operation is in dgit(7), including coverage of out-of-course situations. dgit is a complex and powerful program, so this reference material can be overwhelming. We recommend starting with a guide like this one, or the dgit-…(7) workflow tutorials.
* **Design and implementation documentation for tag2upload** is linked to from the wiki.
* **Debian’s git transition** blog post from December. tag2upload and dgit are part of the git transition project, and aim to support a very wide variety of git workflows. tag2upload and dgit work well with existing git tooling, including git-buildpackage-based approaches.

git-debrebase is conceptually separate from, and functionally independent of, tag2upload and dgit. It’s a git workflow and delta management tool, competing with `gbp pq`, manual use of `quilt`, `git-dpm` and so on.

##### git-debrebase

* **git-debrebase reference documentation** : Of course there’s a comprehensive command-line manual in git-debrebase(1). git-debrebase is quick and easy to use, but it has a complex data model and sophisticated algorithms. This is documented in git-debrebase(5).

#####
diziet.dreamwidth.org
February 15, 2026 at 2:07 PM
Bits from Debian: DebConf 26 Registration and Call for Proposals are open
Registration and the Call for Proposals for DebConf 26 are now open. The 27th edition of the Debian annual conference will be held from _July 20th to July 25th, 2026, in Santa Fe, Argentina._ The conference days will be preceded by DebCamp, which will take place from July 13th to July 19th, 2026. The registration form can be accessed on the DebConf 26 website. After creating an account, click "register" in the profile section. As always, basic registration for DebConf is free of charge for attendees. If you are attending the conference in a professional capacity or as a representative of your company, we kindly ask that you consider registering in one of our paid categories to help cover the costs of organizing the conference and to support subsidizing other community members. The last day to register with guaranteed swag is June 14th. We also encourage eligible individuals to apply for a diversity bursary. Travel, food, and accommodation bursaries are also available. More details can be found on the bursary info page. The last day to apply for a bursary is April 1st. Applicants should receive feedback on their bursary application by May 1st. ## Call for proposals The call for proposals for talks, discussions and other activities is also open. To submit a proposal you need to create an account on the website, and then use the "Submit Talk" button in the profile section. The last day to submit and have your proposal be considered for the main conference schedule, with video coverage guaranteed, is April 1st. ## Become a sponsor DebConf 26 is also accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org or visit the DebConf 26 website. See you in Santa Fe, The DebConf 26 Team
bits.debian.org
February 14, 2026 at 4:05 PM
Erich Schubert: Dogfood Generative AI
Current AI companies **ignore licenses** such as the GPL, and often train on anything they can scrape. This is not acceptable. The AI companies **ignore web conventions**, e.g., they deep link images from your web sites (even adding `?utm_source=chatgpt.com` to image URIs; I suggest that you return 403 on these requests), but do not direct visitors to your site. You do not get a reliable way of opting out from generative AI training or use. For example, the only way to prevent your contents from being used in “Google AI Overviews” is to use `data-nosnippet` and cripple the snippet preview in Google. The “AI” browsers such as Comet, Atlas do not _identify_ as such, but rather pretend they are standard Chromium. There is no way to ban such AI use on your web site.

Generative AI overall is flooding the internet with garbage. It was estimated that 1/3rd of the content uploaded to YouTube is by now AI generated. This includes the same “veteran stories” crap in thousands of variants as well as brainrot content (that at least does not pretend to be authentic), some of which is among the most viewed recent uploads. Hence, these **platforms even _benefit_ from the AI slop**. And don’t blame the “creators” – because you can currently earn a decent amount of money from such contents, people will generate brainrot content.

If you have recently tried to find honest reviews of products you considered buying, you will have noticed thousands of sites with AI generated fake product reviews, all of which are financed by Amazon PartnerNet commissions. Often with hilarious nonsense such as recommending “sewing thread with German instructions” as a tool for repairing a sewing machine. And on Amazon, there are plenty of AI generated product reviews – the use of emoji is a strong hint. And if you leave a negative product review, there is a chance they offer you a refund to get rid of it… And the majority of SPAM that gets through my filters is by now sent via Gmail and Amazon SES.

Partially because of GenAI, StackOverflow is pretty much dead – which used to be one of the most valuable programming resources. (While a lot of people complain about moderation, famous moderator Shog9 from the early SO days suggested that a change in Google’s ranking is also to blame, as it began favoring showing “new” content over the existing answered questions – causing more and more duplicates to be posted because people no longer found the existing good answers.) In January 2026, there were around 3400 questions and 6000 answers posted, fewer than in the _first_ month of SO, August 2008 (before the official launch). Many open-source projects are suffering in many ways, e.g., false bug reports that caused curl to stop its bug bounty program. Wikipedia is also suffering badly from GenAI.

Science is also flooded with poor AI generated papers, often reviewed with help from AI. This is largely due to bad incentives – to graduate, you are expected to write many papers at certain “A” conferences, such as NeurIPS. At these conferences the number of submissions is growing insanely, and the review quality plummets. All too often, the references in these papers are hallucinated, too; and libraries complain that they receive more and more requests to locate literature that does not appear to exist.

However, the worst effect (at least to me as an educator) is the **noskilling effect** (a rather novel term derived from deskilling, I have only seen it in this article by Weßels and Maibaum).
Instead of acquiring skills (writing, reading, summarizing, programming) by practising, too many people now outsource all this to AI, leading to them not learning the basics necessary to advance to a higher skill level. In my impression, this effect is _dramatic_. It is even worse than _deskilling_, as it does not mean losing an advanced skill that you apparently can replace, but often means not acquiring basic skills in the first place. And the earlier pupils start using generative AI, the fewer skills they acquire.

## Dogfood the AI

Let’s **dogfood the AI**. Here’s an outline:

1. Get a list of programming topics, e.g., get a list of algorithms from Wikidata, get a StackOverflow data dump.
2. Generate flawed code examples for the algorithms / programming questions, maybe generate blog posts, too. You do not need a high-quality model for this. Use something you can run locally or access for free.
3. Date everything back in time, remove typical indications of AI use. (A sketch of the back-dating step appears at the end of this post.)
4. Upload to Github, because Microsoft will feed this to OpenAI…

Here is an example prompt that you can use:

    You are a university educator, preparing homework assignments in debugging.
    The programming language used is {lang}. The students are tasked to find bugs in given code.
    Do not just call existing implementations from libraries, but implement the algorithm from scratch.
    Make sure there are two mistakes in the code that need to be discovered by the students.
    Do NOT repeat instructions. Do NOT add small-talk. Do NOT provide a solution.
    The code may have (misleading) comments, but must NOT mention the bugs.
    If you do not know how to implement the algorithm, output an empty response.
    Output only the code for the assignment! Do not use markdown.
    Begin with a code comment that indicates the algorithm name and idea.
    If you indicate a bug, always use a comment with the keyword BUG

    Generate a {lang} implementation (with bugs) of: {n} ({desc})

Remember to remove the BUG comments! If you pick some slightly less common programming languages (by quantity of available code, say Go or Rust) you have higher chances that this gets into the training data.

If many of us do this, we can feed GenAI its own garbage. If we generate thousands of bad code examples, this will poison their training data, and may eventually lead to an effect known as “model collapse”.

In the long run, we need to get back to an internet for people, not an internet for bots. Some kind of “internet 2.0”, but I do not have a clear vision on how to keep AI out – if AI can train on it, they will. And someone will copy and paste the AI generated crap back into whatever system we built. Hence I don’t think technology is the answer here, but human networks of trust.
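As a minimal sketch of step 3: git takes the author and committer timestamps from environment variables, so back-dating a commit is trivial (the file name, message, and date here are illustrative):

    # commit a generated file with a back-dated timestamp
    d='2019-03-07T14:12:00'
    git add dijkstra.go
    GIT_AUTHOR_DATE="$d" GIT_COMMITTER_DATE="$d" \
        git commit -m 'implement dijkstra shortest path'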
www.vitavonni.de
February 13, 2026 at 12:01 PM
Dirk Eddelbuettel: RcppSpdlog 0.0.27 on CRAN: C++20 Accommodations
Version 0.0.27 of RcppSpdlog arrived on CRAN moments ago, and will be uploaded to Debian and built for r2u shortly. The (nice) documentation site will be refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library, written by Gabi Melman, with all the bells and whistles you would want, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

Brian Ripley has now turned C++20 on as a default for R-devel (aka R 4.6.0 ‘to be’), and this turned up misbehavior in packages using RcppSpdlog such as our spdl wrapper (offering a nicer interface from both R and C++) when relying on `std::format`. So for now, we turned this off and remain with `fmt::format` from the fmt library while we investigate further.

The NEWS entry for this release follows.

> #### Changes in RcppSpdlog version 0.0.27 (2026-02-11)
>
> * Under C++20 or later, keep relying on `fmt::format` until issues experienced using `std::format` can be identified and resolved

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
dirk.eddelbuettel.com
February 12, 2026 at 3:58 PM
Freexian Collaborators: Debian Contributions: cross building, rebootstrap updates, Refresh of the patch tagging guidelines and more! (by Anupa Ann Joseph)
# Debian Contributions: 2026-01 Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services. ## cross building, by Helmut Grohne In version 1.10.1, Meson merged a patch to make it call the correct `g-ir-scanner` by default thanks to Eli Schwarz. This problem affected more than 130 source packages. Helmut retried building them all and filed 69 patches as a result. A significant portion of those packages require another Meson change to call the correct `vapigen`. Another notable change is converting gnu-efi to multiarch, which ended up requiring changes to a number of other packages. Since Aurelien dropped the `libcrypt-dev` dependency from `libc6-dev`, this transition now is mostly complete and has resulted in most of the Perl ecosystem correctly expressing `perl-xs-dev` dependencies needed for cross building. It is these infrastructure changes affecting several client packages that this work targets. As a result of this continued work, about 66% of Debian’s source packages now have satisfiable cross Build-Depends in unstable and about 10000 (55%) actually can be cross built. There are now more than 500 open bug reports affecting more than 2000 packages most of which carry patches. ## rebootstrap, by Helmut Grohne Maintaining architecture cross-bootstrap requires continued effort for adapting to archive changes such as `glib2.0` dropping a build profile or an `e2fsprogs` FTBFS. Beyond those generic problems, architecture-specific problems with e.g. `musl-linux-any` or `sparc` may arise. While all these changes move things forward on the surface, the bootstrap tooling has become a growing pile of patches. Helmut managed to upstream two changes to `glibc` for reducing its `Build-Depends` in the `stage2` build profile and thanks Aurelien Jarno. ## Refresh of the patch tagging guidelines, by Raphaël Hertzog Debian Enhancement Proposal #3 (DEP-3) is named “Patch Tagging Guidelines” and standardizes meta-information that Debian contributors can put in patches included in Debian source packages. With the feedback received over the years, and with the change in the package management landscape, the need to refresh those guidelines became evident. As the initial driver of that DEP, I spent a good day reviewing all the feedback (that I kept in a folder) and producing a new version of the document. The changes aim to give more weight to the syntax that is compatible with git format-patch’s output, and also to clarify the expected uses and meanings of a couple of fields, including some algorithm that parsers should follow to define the state of the patch. After the announcement of the new draft on debian-devel, the revised DEP-3 received a significant number of comments that I still have to process. ## Miscellaneous contributions * Helmut uploaded `debvm` making it work with unstable as a target distribution again. * Helmut modernized the code base backing dedup.debian.net significantly expanding the support for type checking. * Helmut fixed the multiarch hinter once more given feedback from Fabian Grünbichler. * Helmut worked on migrating the `rocblas` package to forky. * Raphaël fixed RC bug #1111812 in `publican` and did some maintenance for tracker.debian.org. * Carles added support in the `festival` Debian package for systemd socket activation and systemd service and socket units. 
Adapted the patch for upstream and created a merge request (also fixed a MacOS X building system error while working on it). Updated Orca Wiki documentation regarding festival. Discussed a 2007 bug/feature in festival which allowed having a local shell, and noted that the new systemd socket activation has the same code path.
* Carles, using po-debconf-manager, worked on Catalan translations: 7 reviewed and sent; 5 follow ups, 5 deleted packages.
* Carles made some po-debconf-manager changes: now it attaches the translation file on follow ups, and fixed bullseye compatibility issues.
* Carles reviewed a new Catalan apt translation.
* Carles investigated and reported a lxhotkey bug and sent a patch for the “`abcde`” package.
* Carles made minor updates to different Debian Wiki pages (lxde for dead keys, Ripping with abcde troubleshooting, VirtualBox troubleshooting).
* Stefano renamed build-details.json in Python 3.14 to fix multiarch coinstallability.
* Stefano audited the tooling and ignore lists for checking the contents of the python3.X-minimal packages, finding and fixing some issues in the process.
* Stefano made a few uploads of `python3-defaults` and `dh-python` in support of Python 3.14-as-default in Ubuntu. Also investigated the risk of ignoring byte-compilation failures by default, and started down the road of implementing this.
* Stefano did some sysadmin work on debian.social infrastructure.
* Stefano and Santiago worked on preparations for DebConf 26, especially to help the local team on opening the registration, and reviewing the budget to be presented for approval.
* Stefano uploaded routine updates of `python-virtualenv` and `python-flexmock`.
* Antonio collaborated with DSA on enabling a new proxy for salsa to prevent scrapers from taking the service down.
* Antonio did miscellaneous salsa administrative tasks.
* Antonio fixed a few Ruby packages towards the Ruby 3.4 transition.
* Antonio started work on planned improvements to the DebConf registration system.
* Santiago prepared unstable updates for the latest upstream versions of knot-dns and knot-resolver, the authoritative DNS server and DNS resolver software developed by CZ.NIC. It is worth highlighting that, given the separation of functionality compared to other implementations, `knot-dns` and `knot-resolver` are also less complex software, which results in advantages in terms of security: only three CVEs have been reported for knot-dns since 2011.
* Santiago made some routine reviews of merge requests proposed for the Salsa CI’s pipeline, e.g. a proposal to fix how sbuild chooses the chroot when building a package for experimental.
* Colin fixed lots of Python packages to handle Python 3.14 and to avoid using the deprecated `pkg_resources` module.
* Colin added forky support to the images used in Salsa CI pipelines.
* Colin began working on getting a release candidate of `groff 1.24.0` (the first upstream release since mid-2023, so a very large set of changes) into experimental.
* Lucas kept working on the preparation for the Ruby 3.4 transition. Some packages fixed (support build against Ruby 3.3 and 3.4): `ruby-rbpdf`, `jekyll`, `origami-pdf`, `ruby-kdl`, `ruby-twitter`, `ruby-twitter-text`, `ruby-globalid`.
* Lucas supported some potential mentors in the Google Summer of Code 26 program in submitting their projects.
* Anupa worked on the point release announcements for Debian 12.13 and 13.3 from the Debian publicity team side.
* Anupa attended the publicity team meeting to discuss the team activities and to plan an online sprint in February.
* Anupa attended meetings with the Debian India team to plan and coordinate the MiniDebConf Kanpur and sent out related Micronews.
* Emilio coordinated various transitions and helped get rid of llvm-toolchain-17 from sid.
www.freexian.com
February 12, 2026 at 9:58 AM
Freexian Collaborators: Writing a new worker task for Debusine (by Carles Pina i Estany)
Debusine is a tool designed for Debian developers and Operating System developers in general. You can try out Debusine on debusine.debian.net, and follow its development on salsa.debian.org.

This post describes how to write a new worker task for Debusine. It can be used to add tasks to a self-hosted Debusine instance, or to submit to the Debusine project new tasks to add new capabilities to Debusine.

Tasks are the lower-level pieces of Debusine workflows. Examples of tasks are Sbuild, Lintian, Debdiff (see the available tasks). This post will document the steps to write a new basic worker task. The example will add a worker task that runs reprotest and creates an artifact of the new type `ReprotestArtifact` with the reprotest log.

Tasks are usually used by workflows. Workflows solve high-level goals by creating and orchestrating different tasks (e.g. a Sbuild workflow would create different Sbuild tasks, one for each architecture).

## Overview of tasks

A task usually does the following:

* It receives structured data defining its input artifacts and configuration
* Input artifacts are downloaded
* A process is run by the worker (e.g. `lintian`, `debdiff`, etc.). In this blog post, it will run `reprotest`
* The output (files, logs, exit code, etc.) is analyzed, artifacts and relations might be generated, and the work request is marked as completed, either with `Success` or `Failure`

If you want to follow the tutorial and add the `Reprotest` task, your Debusine development instance should have at least one worker, one user, a debusine client set up, and permissions for the client to create tasks. All of this can be set up by following the steps in the Contribute section of the documentation.

This blog post shows a functional `Reprotest` task. This task is not currently part of Debusine. The Reprotest task implementation is simplified (no error handling, unit tests, specific view, docs, some shortcuts in the environment preparation, etc.). At some point, in Debusine, we might add a `debrebuild` task which is based on buildinfo files and uses snapshot.debian.org to recreate the binary packages.

## Defining the inputs of the task

The input of the reprotest task will be a source artifact (a Debian source package). We model the input with pydantic in `debusine/tasks/models.py`:

    class ReprotestData(BaseTaskDataWithExecutor):
        """Data for Reprotest task."""

        source_artifact: LookupSingle


    class ReprotestDynamicData(BaseDynamicTaskDataWithExecutor):
        """Reprotest dynamic data."""

        source_artifact_id: int | None = None

The `ReprotestData` is what the user will input. A `LookupSingle` is a lookup that resolves to a single artifact. We would also have configuration for the desired `variations` to test, but we have left that out of this example for simplicity. Configuring variations is left as an exercise for the reader.

Since `ReprotestData` is a subclass of `BaseTaskDataWithExecutor` it also contains `environment` where the user can specify in which environment the task will run. The environment is an artifact with a Debian image.

The `ReprotestDynamicData` holds the resolution of all lookups. These can be seen in the “Internals” tab of the work request view.
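For illustration, task data matching this model could look like the following sketch, previewing the execution example near the end of this post (the artifact ID here is made up):

    cat <<EOF > reprotest.yaml
    source_artifact: 42
    environment: "debian/match:codename=bookworm"
    EOF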
## Add the new `Reprotest` artifact data class In order for the reprotest task to create a new Artifact of the type `DebianReprotest` with the log and output metadata: add the new category to `ArtifactCategory` in `debusine/artifacts/models.py`: REPROTEST = "debian:reprotest" In the same file add the `DebianReprotest` class: class DebianReprotest(ArtifactData): """Data for debian:reprotest artifacts.""" reproducible: bool | None = None def get_label(self) -> str: """Return a short human-readable label for the artifact.""" return "reprotest analysis" It could also include the package name or version. In order to have the category listed in the work request output artifacts table, edit the file `debusine/db/models/artifacts.py`: In `ARTIFACT_CATEGORY_ICON_NAMES` add `ArtifactCategory.REPROTEST: "folder",` and in `ARTIFACT_CATEGORY_SHORT_NAMES` add `ArtifactCategory.REPROTEST: "reprotest",`. ## Create the new Task class In `debusine/tasks/` create a new file `reprotest.py`. reprotest.py # Copyright © The Debusine Developers # See the AUTHORS file at the top-level directory of this distribution # # This file is part of Debusine. It is subject to the license terms # in the LICENSE file found in the top-level directory of this # distribution. No part of Debusine, including this file, may be copied, # modified, propagated, or distributed except according to the terms # contained in the LICENSE file. """Task to use reprotest in debusine.""" from pathlib import Path from typing import Any from debusine import utils from debusine.artifacts.local_artifact import ReprotestArtifact from debusine.artifacts.models import ( ArtifactCategory, CollectionCategory, DebianSourcePackage, DebianUpload, WorkRequestResults, get_source_package_name, get_source_package_version, ) from debusine.client.models import RelationType from debusine.tasks import BaseTaskWithExecutor, RunCommandTask from debusine.tasks.models import ReprotestData, ReprotestDynamicData from debusine.tasks.server import TaskDatabaseInterface class Reprotest( RunCommandTask[ReprotestData, ReprotestDynamicData], BaseTaskWithExecutor[ReprotestData, ReprotestDynamicData], ): """Task to use reprotest in debusine.""" TASK_VERSION = 1 CAPTURE_OUTPUT_FILENAME = "reprotest.log" def __init__( self, task_data: dict[str, Any], dynamic_task_data: dict[str, Any] | None = None, ) -> None: """Initialize object.""" super().__init__(task_data, dynamic_task_data) self._reprotest_target: Path | None = None def build_dynamic_data( self, task_database: TaskDatabaseInterface ) -> ReprotestDynamicData: """Compute and return ReprotestDynamicData.""" input_source_artifact = task_database.lookup_single_artifact( self.data.source_artifact ) assert input_source_artifact is not None self.ensure_artifact_categories( configuration_key="input.source_artifact", category=input_source_artifact.category, expected=( ArtifactCategory.SOURCE_PACKAGE, ArtifactCategory.UPLOAD, ), ) assert isinstance( input_source_artifact.data, (DebianSourcePackage, DebianUpload) ) subject = get_source_package_name(input_source_artifact.data) version = get_source_package_version(input_source_artifact.data) assert self.data.environment is not None environment = self.get_environment( task_database, self.data.environment, default_category=CollectionCategory.ENVIRONMENTS, ) return ReprotestDynamicData( source_artifact_id=input_source_artifact.id, subject=subject, parameter_summary=f"{subject}_{version}", environment_id=environment.id, ) def get_input_artifacts_ids(self) -> list[int]: """Return the list of 
input artifact IDs used by this task.""" if not self.dynamic_data: return [] return [ self.dynamic_data.source_artifact_id, self.dynamic_data.environment_id, ] def fetch_input(self, destination: Path) -> bool: """Download the required artifacts.""" assert self.dynamic_data artifact_id = self.dynamic_data.source_artifact_id assert artifact_id is not None self.fetch_artifact(artifact_id, destination) return True def configure_for_execution(self, download_directory: Path) -> bool: """ Find a .dsc in download_directory. Install reprotest and other utilities used in _cmdline. Set self._reprotest_target to it. :param download_directory: where to search the files :return: True if valid files were found """ self._prepare_executor_instance() if self.executor_instance is None: raise AssertionError("self.executor_instance cannot be None") self.run_executor_command( ["apt-get", "update"], log_filename="install.log", run_as_root=True, check=True, ) self.run_executor_command( [ "apt-get", "--yes", "--no-install-recommends", "install", "reprotest", "dpkg-dev", "devscripts", "equivs", "sudo", ], log_filename="install.log", run_as_root=True, ) self._reprotest_target = utils.find_file_suffixes( download_directory, [".dsc"] ) return True def _cmdline(self) -> list[str]: """ Build the reprotest command line. Use configuration of self.data and self._reprotest_target. """ target = self._reprotest_target assert target is not None cmd = [ "bash", "-c", f"TMPDIR=/tmp ; cd /tmp ; dpkg-source -x {target} package/; " "cd package/ ; mk-build-deps ; apt-get install --yes ./*.deb ; " "rm *.deb ; " "reprotest --vary=-time,-user_group,-fileordering,-domain_host .", ] return cmd @staticmethod def _cmdline_as_root() -> bool: r"""apt-get install --yes ./\*.deb must be run as root.""" return True def task_result( self, returncode: int | None, execute_directory: Path, # noqa: U100 ) -> WorkRequestResults: """ Evaluate task output and return success. For a successful run of reprotest: -must have the output file -exit code is 0 :return: WorkRequestResults.SUCCESS or WorkRequestResults.FAILURE. """ reprotest_file = execute_directory / self.CAPTURE_OUTPUT_FILENAME if reprotest_file.exists() and returncode == 0: return WorkRequestResults.SUCCESS return WorkRequestResults.FAILURE def upload_artifacts( self, exec_directory: Path, *, execution_result: WorkRequestResults ) -> None: """Upload the ReprotestArtifact with the files and relationships.""" if not self.debusine: raise AssertionError("self.debusine not set") assert self.dynamic_data is not None assert self.dynamic_data.parameter_summary is not None reprotest_artifact = ReprotestArtifact.create( reprotest_output=exec_directory / self.CAPTURE_OUTPUT_FILENAME, reproducible=execution_result == WorkRequestResults.SUCCESS, package=self.dynamic_data.parameter_summary, ) uploaded = self.debusine.upload_artifact( reprotest_artifact, workspace=self.workspace_name, work_request=self.work_request_id, ) assert self.dynamic_data is not None assert self.dynamic_data.source_artifact_id is not None self.debusine.relation_create( uploaded.id, self.dynamic_data.source_artifact_id, RelationType.RELATES_TO, ) Below are the main methods with some basic explanation. In order for Debusine to discover the task, add `"Reprotest"` in the file `debusine/tasks/__init__.py` in the `__all__` list. Let’s explain the different methods of the `Reprotest` class: ### `build_dynamic_data` method The worker has no access to Debusine’s database. 
Lookups are all resolved before the task gets dispatched to a worker, so all the worker has to do is download the specified input artifacts. The `build_dynamic_data` method looks up the artifact, asserts that it has a valid category, extracts the package name and version, and gets the environment in which the task will be executed. The `environment` is needed to run the task (`reprotest` will run in a container using `unshare`, `incus`…).

    def build_dynamic_data(
        self, task_database: TaskDatabaseInterface
    ) -> ReprotestDynamicData:
        """Compute and return ReprotestDynamicData."""
        input_source_artifact = task_database.lookup_single_artifact(
            self.data.source_artifact
        )
        assert input_source_artifact is not None
        self.ensure_artifact_categories(
            configuration_key="input.source_artifact",
            category=input_source_artifact.category,
            expected=(
                ArtifactCategory.SOURCE_PACKAGE,
                ArtifactCategory.UPLOAD,
            ),
        )
        assert isinstance(
            input_source_artifact.data, (DebianSourcePackage, DebianUpload)
        )
        subject = get_source_package_name(input_source_artifact.data)
        version = get_source_package_version(input_source_artifact.data)
        assert self.data.environment is not None
        environment = self.get_environment(
            task_database,
            self.data.environment,
            default_category=CollectionCategory.ENVIRONMENTS,
        )
        return ReprotestDynamicData(
            source_artifact_id=input_source_artifact.id,
            subject=subject,
            parameter_summary=f"{subject}_{version}",
            environment_id=environment.id,
        )

### `get_input_artifacts_ids` method

Used to list the task’s input artifacts in the web UI.

    def get_input_artifacts_ids(self) -> list[int]:
        """Return the list of input artifact IDs used by this task."""
        if not self.dynamic_data:
            return []
        assert self.dynamic_data.source_artifact_id is not None
        return [self.dynamic_data.source_artifact_id]

### `fetch_input` method

Download the required artifacts on the worker.

    def fetch_input(self, destination: Path) -> bool:
        """Download the required artifacts."""
        assert self.dynamic_data
        artifact_id = self.dynamic_data.source_artifact_id
        assert artifact_id is not None
        self.fetch_artifact(artifact_id, destination)
        return True

### `configure_for_execution` method

Install the packages needed by the task and set `_reprotest_target`, which is used to build the task’s command line.

    def configure_for_execution(self, download_directory: Path) -> bool:
        """
        Find a .dsc in download_directory.

        Install reprotest and other utilities used in _cmdline.
        Set self._reprotest_target to it.

        :param download_directory: where to search the files
        :return: True if valid files were found
        """
        self._prepare_executor_instance()
        if self.executor_instance is None:
            raise AssertionError("self.executor_instance cannot be None")
        self.run_executor_command(
            ["apt-get", "update"],
            log_filename="install.log",
            run_as_root=True,
            check=True,
        )
        self.run_executor_command(
            [
                "apt-get",
                "--yes",
                "--no-install-recommends",
                "install",
                "reprotest",
                "dpkg-dev",
                "devscripts",
                "equivs",
                "sudo",
            ],
            log_filename="install.log",
            run_as_root=True,
        )
        self._reprotest_target = utils.find_file_suffixes(
            download_directory, [".dsc"]
        )
        return True

### `_cmdline` method

Return the command line to run the task. In this case, and to keep the example simple, we will run `reprotest` directly in the worker’s executor VM/container, without giving it an isolated virtual server. So, this command installs the build dependencies required by the package (so `reprotest` can build it) and runs reprotest itself.

    def _cmdline(self) -> list[str]:
        """
        Build the reprotest command line.
        Use configuration of self.data and self._reprotest_target.
        """
        target = self._reprotest_target
        assert target is not None
        cmd = [
            "bash",
            "-c",
            f"TMPDIR=/tmp ; cd /tmp ; dpkg-source -x {target} package/; "
            "cd package/ ; mk-build-deps ; apt-get install --yes ./*.deb ; "
            "rm *.deb ; "
            "reprotest --vary=-time,-user_group,-fileordering,-domain_host .",
        ]
        return cmd

Some reprotest variations are disabled. This is to keep the example simple with the set of packages to install and reprotest features.

### `_cmdline_as_root` method

Since packages need to be installed during execution, the command is run as root (in the container):

    @staticmethod
    def _cmdline_as_root() -> bool:
        r"""apt-get install --yes ./\*.deb must be run as root."""
        return True

### `task_result` method

The task succeeded if a log is generated and the return code is 0.

    def task_result(
        self,
        returncode: int | None,
        execute_directory: Path,  # noqa: U100
    ) -> WorkRequestResults:
        """
        Evaluate task output and return success.

        For a successful run of reprotest:
        - must have the output file
        - exit code is 0

        :return: WorkRequestResults.SUCCESS or WorkRequestResults.FAILURE.
        """
        reprotest_file = execute_directory / self.CAPTURE_OUTPUT_FILENAME
        if reprotest_file.exists() and returncode == 0:
            return WorkRequestResults.SUCCESS
        return WorkRequestResults.FAILURE

### `upload_artifacts` method

Create the `ReprotestArtifact` with the log and the reproducible boolean, upload it, and then add a relation between the `ReprotestArtifact` and the source package:

    def upload_artifacts(
        self, exec_directory: Path, *, execution_result: WorkRequestResults
    ) -> None:
        """Upload the ReprotestArtifact with the files and relationships."""
        if not self.debusine:
            raise AssertionError("self.debusine not set")
        assert self.dynamic_data is not None
        assert self.dynamic_data.parameter_summary is not None
        reprotest_artifact = ReprotestArtifact.create(
            reprotest_output=exec_directory / self.CAPTURE_OUTPUT_FILENAME,
            reproducible=execution_result == WorkRequestResults.SUCCESS,
            package=self.dynamic_data.parameter_summary,
        )
        uploaded = self.debusine.upload_artifact(
            reprotest_artifact,
            workspace=self.workspace_name,
            work_request=self.work_request_id,
        )
        assert self.dynamic_data.source_artifact_id is not None
        self.debusine.relation_create(
            uploaded.id,
            self.dynamic_data.source_artifact_id,
            RelationType.RELATES_TO,
        )

## Execution example

To run this task in a local Debusine (see steps to have it ready with an environment, permissions and users created) you can do:

    $ python3 -m debusine.client artifact import-debian -w System http://deb.debian.org/debian/pool/main/h/hello/hello_2.10-5.dsc

(get the artifact ID from the output of that command)

The artifact can be seen in `http://$DEBUSINE/debusine/System/artifact/$ARTIFACTID/`. Then create a `reprotest.yaml`:

    $ cat <<EOF > reprotest.yaml
    source_artifact: $ARTIFACT_ID
    environment: "debian/match:codename=bookworm"
    EOF

Instead of `debian/match:codename=bookworm` it could use the artifact ID. Finally, create the work request to run the task:

    $ python3 -m debusine.client create-work-request -w System reprotest --data reprotest.yaml

Using the Debusine web UI you can see the work request, which should go to `Running` status, then `Completed` with `Success` or `Failure` (depending on whether `reprotest` could reproduce the build). The `Output` tab will show an artifact of type `debian:reprotest` with one file: the log. The `Metadata` tab of the artifact shows its Data: the package name and reproducible (true or false).
## What is left to do? This was a simple example of creating a task. Other things that could be done: * unit tests * documentation * configurable `variations` * running `reprotest` directly on the worker host, using the executor environment as a `reprotest` “virtual server” * in this specific example, the command line might be doing too many things that could maybe be done by other parts of the task, such as `prepare_environment`. * integrate it in a workflow so it’s easier to use (e.g. part of `QaWorkflow`) * extract more from the log than just pass/fail * display the output in a more useful way (implement an artifact specialized view)
www.freexian.com
February 10, 2026 at 9:53 AM
Louis-Philippe Véronneau: Montreal Subway Foot Traffic Data, 2025 edition
Another year of data from _Société de Transport de Montréal_, Montreal's transit agency! A few highlights this year:

1. Although the Saint-Michel station closed for emergency repairs in November 2024, traffic never bounced back to its pre-closure levels and is still stuck somewhere around 2022 Q2 levels. I wonder if this could be caused by the roadwork on Jean-Talon for the new Blue Line stations making it harder for folks in Montreal-Nord to reach the station by bus.
2. The opening of the Royalmount shopping center has had a durable impact on the traffic at the De la Savane station. I reported on this last year, but it seems this wasn't just a fad.
3. With the completion of the Deux-Montagnes branch of the Réseau express métropolitain (REM, a light-rail, above-surface transit network still under construction), the transfer stations to the Montreal subway have seen major traffic increases. The Édouard-Montpetit station has nearly reached its previous all-time record of 2015 and the McGill station has recovered from the general slump all the other stations have had in 2025.
4. The Assomption station, which used to have one of the lowest numbers of riders of the subway network, has had tremendous growth in the past few years. This is mostly explained by the many high-rise projects that were built around the station since the end of the COVID-19 pandemic.
5. Although still affected by a very high seasonality, the Jean-Drapeau station broke its previous record of 2019, a testament to the continued attraction power of the various summer festivals taking place on the Sainte-Hélène and Notre-Dame islands.

More generally, it seems the Montreal subway has had a pretty bad year. Traffic had been slowly climbing back since the COVID-19 pandemic, but this is the first year since 2020 such a sharp decline can be witnessed. Even major stations like Jean-Talon or Lionel-Groulx are on a downward trend and it is pretty worrisome.

As for causes, a few things come to mind. First of all, as the number of Montrealers commuting to work by bike continues to rise1, a modal shift from public transit to active mobility is to be expected. As local experts put it, this is not uncommon and has been seen in other cities before.

Another important factor that certainly turned people away from the subway this year has been the impact of the continued housing crisis in Montreal. As more and more people get kicked out of their apartments, many have been seeking refuge in the subway stations to find shelter. Sadly, this also brought an unprecedented wave of incivilities. As riders' sense of security sharply decreased, the STM eventually resorted to banning unhoused people from sheltering in the subway. This decision did bring back some peace to the network, but one can posit the damage had already been done and many casual riders are still avoiding the subway for this reason.

Finally, the weeks-long STM workers' strike in Q4 had an important impact on general traffic, as it severely reduced the opening hours of the subway. As with the previous item, once people find alternative ways to get around, it's always harder to bring them back.

Hopefully, my 2026 report will be a more cheerful one...
_By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic._

* Orange line (top10)
* Green line (top10)
* Blue line
* Yellow line
* Global Top 10

## Licences

* The subway map displayed on this page, the original dataset and my modified dataset are licensed under CC0 1.0: they are in the public domain.
* The R code I wrote is licensed under the GPLv3+. It has not changed in a few years.

* * *

1. Mostly thanks to major improvements to the cycling network and the BIXI bikesharing program. ↩
veronneau.org
February 8, 2026 at 11:49 PM
Colin Watson: Free software activity in January 2026
About 80% of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via GitHub Sponsors (thanks!). If you appreciate this sort of work and are at a company that uses Debian, have a look to see whether you can pay for any of Freexian‘s services; as well as the direct benefits, that revenue stream helps to keep Debian development sustainable for me and several other lovely people. You can also support my work directly via Liberapay or GitHub Sponsors. ## Python packaging New upstream versions: * django-macaddress (fixing use of `pkg_resources`) * fsspec (fixing a build failure with Python 3.14) * ipyparallel * pycodestyle * pyflakes (fixing a build failure with Python 3.14) * pyroma * pytest-golden (fixing a regression that broke markdown-callouts, which I reported upstream) * pytest-runner * python-auditwheel * python-b2sdk (fixing a build failure with Python 3.14) * python-certifi * python-django-imagekit (fixing a build failure with Python 3.14) * python-flake8 (fixing a build failure with a new pyflakes version) * python-ibm-cloud-sdk-core (contributed supporting fix upstream) * python-openapi-core (fixing a build failure with Python 3.14) * python-pdoc (fixing a build failure with Python 3.14) * python-pyfunceble * python-pytest-run-parallel * python-pytokens * python-weblogo (fixing use of `pkg_resources`) * python-wheezy.template * smart-open * sphinx-togglebutton * sqlobject * supervisor (fixing use of `pkg_resources`) * vcr.py (fixing a build failure with Python 3.14) * zope.interface (including a fix for a Python 3.14 failure in python-klein, which I contributed upstream) * zope.testrunner (fixing a build failure with Python 3.14) Fixes for Python 3.14: * pdfposter (contributed upstream) * pexpect * poetry * pyhamcrest * pylint-gitlab * python-astor * python-easydev (contributed upstream) * python-forbiddenfruit * python-ibm-cloud-sdk-core * python-iniparse * python-libusb1 * python-marshmallow-dataclass (contributed upstream) * python-marshmallow (NMU) * python-opentracing * python-opt-einsum-fx * python-spdx-tools * python-stopit * rich (NMU, also requiring an NMU of textual) * scikit-build-core * seqdiag * uncertainties * yarsync (contributed upstream, along with a supporting fix) Fixes for pytest 9: * pyee (contributed upstream) * python-django-celery-beat (contributed upstream) * python-overrides Porting away from the deprecated `pkg_resources`: * beaker * coreapi * cppy (no-change rebuild to remove a spurious dependency) * depthcharge-tools (contributed upstream) * errbot * gajim-antispam (removed unused dependency) * gajim-lengthnotifier (removed unused dependency) * gajim-openpgp (removed unused dependency) * gajim-pgp (removed unused dependency) * gajim-triggers (removed unused dependency) * grapefruit * impacket (contributed upstream) * jupyter-packaging (no-change rebuild to remove a spurious dependency) * khal * pipenv (no-change rebuild to remove a spurious dependency) * pyroma * pytest-runner * pytest-tornado * python-airr * python-aptly * python-docxcompose * python-hatch-mypyc (no-change rebuild to remove a spurious dependency) * python-pyfunceble * python-stopit * python-ttfautohint-py (removed unused dependency) * setuptools-scm (no-change rebuild to remove a spurious dependency) * slimit (removed unused dependency) * sphinx-togglebutton * topplot (contributed upstream) * valinor Other build/test failures: * audioop-lts: FTBFS: ValueError: major component is required * basemap: Tries to access Internet during build * celery: 
FTBFS: FAILED t/unit/backends/test_mongodb.py::test_MongoBackend::test_store_result (contributed upstream) * django-allauth: FTBFS: AttributeError: module ‘fido2.features’ has no attribute ‘webauthn_json_mapping’ * django-tastypie * m2crypto: FTBFS on armhf: AssertionError: 64 != 32 * magicgui * pytest-mypy-testing * python-asttokens * python-distutils-extra: FTBFS: dpkg-buildpackage: error: debian/rules binary subprocess failed with exit status 2 * python-django-extensions: FTBFS: FAILED tests/templatetags/test_highlighting.py::HighlightTagTests::test_should_highlight_python_syntax_with_name * python-gmpy2: FTBFS: ModuleNotFoundError: No module named ‘gmpy2’ * python-jpype: FTBFS on i386, armhf: test/jpypetest/test_buffer.py:394: TypeError (contributed upstream) * python-maturin: Upcoming target-lexicon update * traitlets (contributed upstream) * unattended-upgrades: FTBFS: F824 `global logged_msgs` is unused: name is never assigned in scope (NMU) I investigated several more build failures and suggested removing the packages in question: * aiozmq * mkdocstrings-python-legacy * python-djantic Other bugs: * magicgui: Directly Depends and Build-Depends on dbus * python3-netsnmpagent: Rebuild for libsnmp45 ## Other bits and pieces Alejandro Colomar reported that `man(1)` ignored the `MANWIDTH` environment variable in some circumstances. I investigated this and fixed it upstream. I contributed an ubuntu-dev-tools patch to stop recommending `sudo`. I added forky support to the images used in Salsa CI pipelines. I began working on getting a release candidate of groff 1.24.0 into experimental, though haven’t finished that yet. I worked on some lower-priority security updates for OpenSSH. ## Code reviews * netcfg: Support SSIDs with /, write correct wifi to /etc/network/interfaces (merged and uploaded) * openssh: [INTL:zh] Chinese debconf templates translations (merged) * pymongo (sponsored upload for Aryan Karamtoth) * python-streamz (sponsored upload for Aryan Karamtoth) * smart-open: Please make the build reproducible (fixed in a different way) * uvloop: FTBFS on riscv64 with Python 3.14 as supported (uploaded)
www.chiark.greenend.org.uk
February 8, 2026 at 9:52 PM
Vincent Bernat: Fragments of an adolescent web
I have unearthed a few old articles typed during my adolescence, between 1996 and 1998. Unremarkable at the time, these pages now compose, three decades later, the chronicle of a vanished era.1

The word “blog” does not exist yet. Wikipedia is yet to come. Google has not been born. AltaVista reigns over searches, while already struggling to embrace the nascent immensity of the web2. To meet someone, you had to agree in advance and prepare your route on paper maps. 🗺️

The web is taking off. The CSS specification has just emerged; HTML tables still serve for page layout. Cookies and advertising banners are making their appearance. Pages are adorned with music and videos, forcing browsers to arm themselves with plugins. Netscape Navigator sits on 86% of the territory, but Windows 95 now bundles Internet Explorer to quickly catch up. Facing this offensive, Netscape open-sources its browser.

France falls behind. Outside universities, Internet access remains expensive and laborious. Minitel still reigns, offering a phone directory, train tickets, and remote shopping. This was not yet possible with the Internet: buying a CD online was a pipe dream. Encryption suffers from inappropriate regulation: the DES algorithm is capped at 40 bits and cracked in a few seconds.

These pages bear the trace of the web’s adolescence. Thirty years have passed. The same battles continue: data selling, advertising, monopolies.

* * *

1. Most articles linked here are not translated from French to English. ↩︎
2. I recently noticed that Google no longer fully indexes my blog. For example, it is no longer possible to find the article on lanĉo. I assume this is a consequence of the explosion of AI-generated content or a change in priorities for Google. ↩︎
vincent.bernat.ch
February 8, 2026 at 7:48 PM
Dirk Eddelbuettel: chronometre: A new package (pair) demo for R and Python
Both R and Python make it reasonably easy to work with compiled extensions. But how to access objects in one environment from the other _and share state or (non-trivial) objects_ remains trickier. Recently (and while r-forge was ‘resting’, so we opened GitHub Discussions) a question was asked concerning R and Python object pointer exchange. This led to a pretty decent discussion including arrow interchange demos (pretty ideal if dealing with data.frame-alike objects), but once the focus is on more ‘library-specific’ objects from a given (C or C++, say) library, it is less clear what to do, or how involved it may get.

R has external pointers, and these make it feasible to instantiate _the same object_ in Python. To demonstrate, I created a pair of (minimal) packages wrapping a lovely (small) class from the excellent spdlog library by Gabi Melman, and more specifically in an adapted-for-R version (to avoid some `R CMD check` nags) in my RcppSpdlog package. It is essentially a nicer/fancier C++ version of the `tic()` and `toc()` timing scheme. When an object is instantiated, it ‘starts the clock’, and when we access it later it prints the time elapsed in microsecond resolution. In Modern C++ this takes little more than keeping an internal `chrono` object. Which makes for a nice, small, yet specific object to pass to Python.

So the R side of the package pair instantiates such an object, and accesses its address. For different reasons, sending a ‘raw’ pointer across does not work so well, but a string with the printed address works fabulously (and is a paradigm used by other packages, so we did not invent this). Over on the Python side of the package pair, we then take this string representation and pass it to a little bit of pybind11 code to instantiate a new object. This can of course also expose functionality such as the ‘show time elapsed’ feature, either formatted or just numerically, which is of interest here.
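The hand-off itself, turning a printed address back into a live object, can be illustrated in a few lines of plain Python using `ctypes`. This is a minimal sketch of the principle only, not the chronometre API; the `Stopwatch` structure below is a made-up stand-in:

    import ctypes

    # Stand-in for the (C++) object whose address gets printed as "0x....".
    class Stopwatch(ctypes.Structure):
        _fields_ = [("start_ns", ctypes.c_uint64)]

    sw = Stopwatch(start_ns=42)
    addr_str = hex(ctypes.addressof(sw))             # the "0x...." string handed across
    sw2 = Stopwatch.from_address(int(addr_str, 16))  # re-materialise at that address
    sw2.start_ns = 7                                 # mutate via the second handle ...
    assert sw.start_ns == 7                          # ... the original sees it: same memory

As in the package pair, both handles refer to one underlying object, so whichever side updates it, the other observes the change.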
And that is all that there is! Now this can be done from R as well thanks to reticulate, as the `demo()` (also shown on the package README.md) shows:

    > library(chronometre)
    > demo("chronometre", ask=FALSE)

    demo(chronometre)
    ---- ~~~~~~~~~~~

    > #!/usr/bin/env r
    >
    > stopifnot("Demo requires 'reticulate'" = requireNamespace("reticulate", quietly=TRUE))
    > stopifnot("Demo requires 'RcppSpdlog'" = requireNamespace("RcppSpdlog", quietly=TRUE))
    > stopifnot("Demo requires 'xptr'" = requireNamespace("xptr", quietly=TRUE))
    > library(reticulate)
    > ## reticulate and Python in general these days really want a venv so we will use one,
    > ## the default value is a location used locally; if needed create one
    > ## check for existing virtualenv to use, or else set one up
    > venvdir <- Sys.getenv("CHRONOMETRE_VENV", "/opt/venv/chronometre")
    > if (dir.exists(venvdir)) {
    +     use_virtualenv(venvdir, required = TRUE)
    + } else {
    +     ## create a virtual environment, but make it temporary
    +     Sys.setenv(RETICULATE_VIRTUALENV_ROOT=tempdir())
    +     virtualenv_create("r-reticulate-env")
    +     virtualenv_install("r-reticulate-env", packages = c("chronometre"))
    +     use_virtualenv("r-reticulate-env", required = TRUE)
    + }
    > sw <- RcppSpdlog::get_stopwatch()                    # we use a C++ struct as example
    > Sys.sleep(0.5)                                       # imagine doing some code here
    > print(sw)                                            # stopwatch shows elapsed time
    0.501220
    > xptr::is_xptr(sw)                                    # this is an external pointer in R
    [1] TRUE
    > xptr::xptr_address(sw)                               # get address, format is "0x...."
    [1] "0x58adb5918510"
    > sw2 <- xptr::new_xptr(xptr::xptr_address(sw))        # cloned (!!) but unclassed
    > attr(sw2, "class") <- c("stopwatch", "externalptr")  # class it .. and then use it!
    > print(sw2)                                           # `xptr` allows us to clone and use
    0.501597
    > sw3 <- ch$Stopwatch( xptr::xptr_address(sw) )        # new Python object via string ctor
    > print(sw3$elapsed())                                 # shows output via Python I/O
    datetime.timedelta(microseconds=502013)
    > cat(sw3$count(), "\n")                               # shows double
    0.502657
    > print(sw)                                            # object still works in R
    0.502721

The same object, instantiated in R, is used in Python and thereafter again in R. While _this_ object here is minimal in features, the concept of _passing a pointer_ is universal. We could use it for any interesting object that R can access and Python too can instantiate. Obviously, there be dragons as we pass pointers, so one may want to ascertain that headers from corresponding, compatible library versions are used etc., but the _principle_ is unaffected and should just work.

Both parts of this pair of packages are now at the corresponding repositories: PyPI and CRAN. As I commonly do here on package (change) announcements, I include the (minimal so far) set of high-level changes for the R package.

> #### Changes in version 0.0.2 (2026-02-05)
>
> * Removed the now-redundant unconditional virtualenv use in the demo, given the preceding conditional block
>
> * Updated README.md with badges and an updated demo
>
> #### Changes in version 0.0.1 (2026-01-25)
>
> * Initial version and CRAN upload

Questions, suggestions, bug reports, … are welcome at either the (now awoken from the R-Forge slumber) Rcpp mailing list or the newer Rcpp Discussions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
dirk.eddelbuettel.com
February 8, 2026 at 7:48 PM
Thorsten Alteholz
### **Debian LTS/ELTS**

This was my hundred-and-thirty-ninth month of work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. (As the LTS and ELTS teams have been merged, there is only one paragraph left for both activities.) During my allocated time I uploaded or worked on:

* [DLA 4449-1] zvbi security update to fix five CVEs related to uninitialized pointers and integer overflows.
* [DLA 4450-1] taglib security update to fix one CVE related to a segmentation violation.
* [DLA 4451-1] shapelib security update to fix one CVE related to a double free.
* [DLA 4454-1] libuev security update to fix one CVE related to a buffer overrun.
* [ELA-1620-1] zvbi security update to fix five CVEs in Buster and Stretch related to uninitialized pointers and integer overflows.
* [ELA-1621-1] taglib security update to fix one CVE in Buster and Stretch related to a segmentation violation.
* [#1126167] bookworm-pu bug for zvbi to fix five CVEs in Bookworm.
* [#1126273] bookworm-pu bug for taglib to fix one CVE in Bookworm.
* [#1126370] bookworm-pu bug for libuev to fix one CVE in Bookworm.

I also attended the monthly LTS/ELTS meeting. While working on updates, I stumbled upon packages whose CVEs had been postponed for a long time even though their CVSS scores were rather high. I wonder whether one should pay more attention to such postponed issues; otherwise one could just as well have marked them as _ignored_ in the first place.

### **Debian Printing**

Unfortunately I didn't find any time to work on this topic.

### **Debian Lomiri**

This month I worked on unifying the packaging between Debian and Ubuntu. This makes it easier to work on those packages independent of the platform used.

**This work is generously funded by Fre(i)e Software GmbH!**

### **Debian Astro**

This month I uploaded a new upstream version or a bugfix version of:

* … supernovas to unstable (sponsored upload).
* … libahp-xc to unstable.
* … c-munipack to unstable.

### **Debian IoT**

Unfortunately I didn't find any time to work on this topic.

### **Debian Mobcom**

Unfortunately I didn't find any time to work on this topic.

### **misc**

This month I uploaded a new upstream version or a bugfix version of:

* … liburjtag to unstable.

Unfortunately this month I was distracted from my normal Debian work by other, unpleasant things, so the paragraphs above are mostly empty. I now have to think about how much of my spare time I can dedicate to Debian in the future.
blog.alteholz.eu
February 8, 2026 at 1:47 PM
Reproducible Builds: Reproducible Builds in January 2026
<p class="lead"><strong>Welcome to the first monthly report in 2026 from the <a href="https://reproducible-builds.org">Reproducible Builds</a> project!</strong></p> <p><a href="https://reproducible-builds.org/"><img alt="" src="https://reproducible-builds.org/images/reports/2026-01/reproducible-builds.png#right" /></a></p> <p>These reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the <a href="https://reproducible-builds.org/contribute/"><em>Contribute</em></a> page on our website.</p> <ol> <li><a href="https://reproducible-builds.org/blog/index.rss#flathub-now-testing-for-reproducibility">Flathub now testing for reproducibility</a></li> <li><a href="https://reproducible-builds.org/blog/index.rss#reproducibility-identifying-software-projects-that-will-fail-to-build-in-2038">Reproducibility identifying projects that will fail to build in 2038</a></li> <li><a href="https://reproducible-builds.org/blog/index.rss#distribution-work">Distribution work</a></li> <li><a href="https://reproducible-builds.org/blog/index.rss#tool-development">Tool development</a></li> <li><a href="https://reproducible-builds.org/blog/index.rss#two-new-academic-papers">Two new academic papers</a></li> <li><a href="https://reproducible-builds.org/blog/index.rss#upstream-patches">Upstream patches</a></li> </ol> <hr /> <h3 id="flathub-now-testing-for-reproducibility">Flathub now testing for reproducibility</h3> <p><a href="https://flathub.org/"><img alt="" src="https://reproducible-builds.org/images/reports/2026-01/flathub.png#right" /></a></p> <p><a href="https://flathub.org/">Flathub</a>, the primary repository/app store for <a href="https://flatpak.org/">Flatpak</a>-based applications, has begun checking for build reproducibility. <a href="https://docs.flathub.org/blog/vorarbeiter-2026">According to a recent blog post</a>:</p> <blockquote> <p>We have started testing binary reproducibility of <code class="language-plaintext highlighter-rouge">x86_64</code> builds targeting the stable repository. This is possible thanks to <a href="https://github.com/flathub-infra/flathub-repro-checker">flathub-repro-checker</a>, a tool doing the necessary legwork to recreate the build environment and compare the result of the rebuild with what is published on Flathub. While these tests have been running for a while now, we have recently restarted them from scratch after enabling S3 storage for diffoscope artifacts.</p> </blockquote> <p>The test results and status is available on their <a href="https://builds.flathub.org/reproducible">reproducible builds page</a>.</p> <p><br /></p> <h3 id="reproducibility-identifying-software-projects-that-will-fail-to-build-in-2038">Reproducibility identifying software projects that will fail to build in 2038</h3> <p>Longtime Reproducible Builds developer Bernhard M. 
Wiedemann <a href="https://www.reddit.com/r/linux/comments/1qfw17a/today_is_y2k38_commemoration_day_t12/">posted on Reddit on “Y2K38 commemoration day T-12”</a> — that is to say, twelve years to the day before the UNIX Epoch will no longer fit into a signed 32-bit integer variable on 19th January 2038.</p> <p>Bernhard’s comment succinctly outlines the problem as well as notes some of the potential remedies, as well as <a href="https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118326">links to a discussion with the GCC developers</a> regarding “adding warnings for <code class="language-plaintext highlighter-rouge">int</code> → <code class="language-plaintext highlighter-rouge">time_t</code> conversions”.</p> <p>At the time of publication, Bernard’s topic had generated <a href="https://www.reddit.com/r/linux/comments/1qfw17a/today_is_y2k38_commemoration_day_t12/">50 comments in response</a>.</p> <p><br /></p> <h3 id="distribution-work">Distribution work</h3> <p><a href="https://conda-forge.org/"><img alt="" src="https://reproducible-builds.org/images/reports/2026-01/conda-forge.png#right" /></a></p> <p><a href="https://conda.org/"><strong>Conda</strong></a> is language-agnostic package manager which was originally developed to help Python data scientists and is now a popular package manager for Python and R.</p> <p><a href="https://conda-forge.org/"><em>conda-forge</em></a>, a community-led infrastructure for Conda recently revamped their <a href="https://prefix-dev.github.io/reproducible-builds/v1.html">dashboards to rebuild packages straight to track reproducibility</a>. There have been changes over the past two years to make the <em>conda-forge</em> build tooling fully reproducible by embedding the ‘lockfile’ of the entire build environment inside the packages.</p> <p><br /></p> <p><a href="https://debian.org/"><img alt="" src="https://reproducible-builds.org/images/reports/2026-01/debian.png#right" /></a></p> <p>In <strong>Debian</strong> this month:</p> <ul> <li> <p>Scott Talbert <a href="https://tracker.debian.org/news/1705702/accepted-dh-haskell-0613-source-into-unstable/">uploaded a new version of <code class="language-plaintext highlighter-rouge">dh-haskell</code></a> (0.6.13), reverting parallel support as it broke reproducibility, thereby fixing Debian bug <a href="https://bugs.debian.org/1125000">#1125000</a>.</p> </li> <li> <p>Vagrant Cascadian posted to <a href="https://lists.reproducible-builds.org/listinfo/rb-general/">our mailing list</a> on the topic of <a href="https://lists.reproducible-builds.org/pipermail/rb-general/2026-January/003987.html">“Duplicate Debian packages with matching name-version-arch problem”</a>. The issue is that <code class="language-plaintext highlighter-rouge">.buildinfo</code> files only “record the package name, version and architecture of the build-dependencies (and perhaps a bit more), but there are <a href="https://lists.debian.org/debian-snapshot/2025/10/msg00002.html">corner cases where multiple artifacts have the same name, version and architecture</a>”. This generated <a href="https://lists.reproducible-builds.org/pipermail/rb-general/2026-January/thread.html#3987">some discussion on the mailing list</a> as well as elsewhere in Debian.</p> </li> <li> <p>Roland Clobus also posted to our mailing list regarding <a href="https://lists.reproducible-builds.org/pipermail/rb-general/2026-January/003991.html"><em>Building Debian Live images from snapshot.debian.org</em></a>. 
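The date itself is simple arithmetic; a couple of lines of Python (an illustration added here, not taken from Bernhard’s post) reproduce the cut-off:

    import datetime

    # Largest value a signed 32-bit time_t can hold, in seconds since the epoch.
    last = 2**31 - 1
    print(datetime.datetime.fromtimestamp(last, tz=datetime.timezone.utc))
    # -> 2038-01-19 03:14:07+00:00; one second later, a 32-bit time_t overflows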
### Distribution work

**Conda** is a language-agnostic package manager which was originally developed to help Python data scientists and is now a popular package manager for Python and R.

_conda-forge_, a community-led infrastructure for Conda, recently revamped their dashboards that rebuild packages in order to track reproducibility. There have been changes over the past two years to make the _conda-forge_ build tooling fully reproducible by embedding the ‘lockfile’ of the entire build environment inside the packages.

In **Debian** this month:

* Scott Talbert uploaded a new version of `dh-haskell` (0.6.13), reverting parallel support as it broke reproducibility, thereby fixing Debian bug #1125000.

* Vagrant Cascadian posted to our mailing list on the topic of “Duplicate Debian packages with matching name-version-arch problem”. The issue is that `.buildinfo` files only “record the package name, version and architecture of the build-dependencies (and perhaps a bit more), but there are corner cases where multiple artifacts have the same name, version and architecture”. This generated some discussion on the mailing list as well as elsewhere in Debian.

* Roland Clobus also posted to our mailing list regarding _Building Debian Live images from snapshot.debian.org_. This surfaced an issue regarding the timestamps of the `.deb` files, leading to Roland filing Debian bug #1126000 to liaise with the developers of the _snapshot.debian.org_ service.

* A change was made to migrate away from using the results from _tests.reproducible-builds.org_ when deciding whether a package is a suitable candidate for the Debian _testing_ distribution (the staging area for the next stable Debian release), and to use the results from _reproduce.debian.net_ instead. This was, according to Paul Gevers’ merge request, because the former service “does so by building twice in a row with varying build environment. What we are actually interested in is if the binaries that we ship can be reproduced”. The information provided by _reproduce.debian.net_ is currently being used to delay or speed up packages’ migration time based on their reproducibility status, but it has the potential, in the future, to be used to block unreproducible packages from migrating entirely.

* 41 reviews of Debian packages were added, 7 were updated and 37 were removed this month, adding to our knowledge about identified issues. Chris Lamb identified and added a new `source_date_epoch_affected_by_timezone_by_d_compiler_gdc` issue type, as well as `timezone_variant_in_argparse_manpage`.

In **NixOS** this month, it was announced that the GNU Guix Full Source Bootstrap was ported to NixOS as part of Wire Jansen’s bachelor’s thesis (PDF). At the time of publication, this change has landed in Nix’s `stdenv`.

Lastly, Bernhard M. Wiedemann posted another **openSUSE** monthly update for his work there.

### Tool development

**diffoscope** is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions `310` and `311` to Debian:

* Fix test compatibility with _u-boot-tools_ version `2026-01`. […]
* Drop the implied `Rules-Requires-Root: no` entry in `debian/control`. […]
* Bump `Standards-Version` to 4.7.3. […]
* Reference the Debian `ocaml` package instead of `ocaml-nox`. (#1125094)
* Apply a patch by Jelle van der Waa to adjust a test fixture to match new lines. […]
* Also drop the implied `Priority: optional` from `debian/control`. […]

In addition, Holger Levsen uploaded two versions of **disorderfs**, first updating the package from FUSE 2 to FUSE 3 as described in last month’s report, as well as updating the packaging to the latest Debian standards. A second upload (`0.6.2-1`) was subsequently made, with Holger adding instructions on how to add the upstream release to our release archive and incorporating changes by Roland Clobus to set `_FILE_OFFSET_BITS` on 32-bit platforms, fixing a build failure on 32-bit systems.

Vagrant Cascadian updated _diffoscope_ in GNU Guix to version `311-2-ge4ec97f7` and _disorderfs_ to `0.6.2`.

### Two new academic papers

Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris’ in-house research laboratory, the Information Processing and Communications Laboratory (LTCI), published a paper this month titled _Docker Does Not Guarantee Reproducibility_:

> […] While Docker is frequently cited in the literature as a tool that enables reproducibility in theory, the extent of its guarantees and limitations in practice remains under-explored. In this work, we address this gap through two complementary approaches. First, we conduct a systematic literature review to examine how Docker is framed in scientific discourse on reproducibility and to identify documented best practices for writing `Dockerfile`s enabling reproducible image building. Then, we perform a large-scale empirical study of 5,298 Docker builds collected from GitHub workflows. By rebuilding these images and comparing the results with their historical counterparts, we assess the real reproducibility of Docker images and evaluate the effectiveness of the best practices identified in the literature.

A PDF of their paper is available online.

Quentin Guilloteau, Antoine Waehren and Florina M. Ciorba of the University of Basel in Switzerland **also** published a _Docker_-related paper, theirs called _Longitudinal Study of the Software Environments Produced by Dockerfiles from Research Artifacts_:

> The reproducibility crisis has affected all scientific disciplines, including computer science (CS). To address this issue, the CS community has established artifact evaluation processes at conferences and in journals to evaluate the reproducibility of the results shared in publications. Authors are therefore required to share their artifacts with reviewers, including code, data, and the software environment necessary to reproduce the results. One method for sharing the software environment proposed by conferences and journals is to utilize container technologies such as Docker and Apptainer. However, these tools rely on non-reproducible tools, resulting in non-reproducible containers. In this paper, we present a tool and methodology to evaluate variations over time in software environments of container images derived from research artifacts. We also present initial results on a small set of `Dockerfiles` from the Euro-Par 2024 conference.

A PDF of their paper is available online.

## Miscellaneous news

On our mailing list this month:

* _kpcyrd_ started a thread after they noticed that “SWHID (also known as ISO/IEC 18670:2025) was published 1.0 in 2022 and ISO standardized in 2025, but uses the insecure SHA-1 as core cryptographic primitive”, asking whether there have been any attempts to upgrade this to SHA-256 or similar.

* Jan-Benedict Glaw asked about _Reproducibility for Libreoffice [when performing] ODT to PDF conversion_ after they observed that “simply calling `libreoffice --convert-to pdf some.odt` results in unreproducible output PDF”. After some replies, Jan-Benedict wrote back to observe that it may be an issue with both timestamps and embedded fonts.

Lastly, _kpcyrd_ added a Rust section to the _Stable order for outputs_ page on our website. […]

### Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

* Bernhard M. Wiedemann:
  * `clamav`
  * `kf6-kuserfeedback`
  * `libaom`
  * `Nim`
  * `otp`
  * `Switcheroo` (by Khaleel Al-Adhami)
  * `uwsm`
  * `ZEO`

* Chris Lamb:
  * #1124697 filed against `sqlalchemy-i18n`.
  * #1125671 filed against `tea-cli`.
  * #1125725 filed against `libimage-librsvg-perl`.
  * #1125727 filed against `seer`.
  * #1125729 filed against `grabix`.
  * #1126038 filed against `hovercraft`.
  * #1126039 filed against `lomiri-location-service`.
  * #1126092 filed against `argparse-manpage`.
  * #1126454 filed against `xarray-safe-rcm`.
  * #1126512 filed against `gcc-15` (forwarded upstream).

* Jochen Sprickerhof:
  * #1124951 filed against `rsyslog`.
  * #1125000 filed against `dh-haskell`.

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our _Contribute_ page on our website. However, you can also get in touch with us via:

* IRC: `#reproducible-builds` on `irc.oftc.net`.
* Mastodon: @reproducible_builds@fosstodon.org
* Mailing list: `rb-general@lists.reproducible-builds.org`
reproducible-builds.org
February 7, 2026 at 1:16 AM
Birger Schacht: Status update, January 2026
January was a slow month; I only did three uploads to Debian unstable:

* xdg-desktop-portal-wlr updated to 0.8.1-1
* swayimg updated to 4.7-1
* usbguard updated to 1.1.4+ds-2, which closed #1122733

I was very happy to see the new dfsg-new-queue and that there are more hands now processing the NEW queue. I also finally got one of the packages accepted that I uploaded after the Trixie release: wayback, which I uploaded last August. There has been another release since then; I'll try to upload that in the next few days.

There was a bug report for `carl` asking for Windows support. `carl` used the xdg crate for looking up the XDG directories, but `xdg` does not support Windows systems (and it seems this will not change). The reporter also provided a PR to replace the dependency with the directories crate, which is more system-agnostic. I adapted the PR a bit, merged it, and released version 0.6.0 of carl.

At my dayjob I refactored django-grouper. `django-grouper` is a package we use to find duplicate objects in our data. Our users often work with datasets of thousands of historical persons, places and institutions, and in projects that run over years and ingest data from multiple sources, it happens that entries are created several times. I wrote the initial app in 2024, but was never really happy about the approach I used back then. It was based on a blog post that describes how to group thousands of similar spreadsheet text cells. It used sklearn's TfidfVectorizer with a custom analyzer and the library sparse_dot_topn for creating the similarity matrix. All in all, the module to calculate the clusters was 80 lines, and with `sparse_dot_topn` it pulled in a rather niche Python library. I was pretty sure that this functionality could also be implemented with basic sklearn functionality, and it was: we are now using DictVectorizer, because in a Django app we are working with objects that can be mapped to dicts anyway. And for clustering the data, the app now uses the DBSCAN algorithm (with the Manhattan distance as metric), as sketched below. The module is now only half the size and the whole app lost one dependency! I released those changes as version 0.3.0 of the app.
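To illustrate the approach (a minimal sketch with made-up records, not the actual django-grouper code): vectorize dict representations of the objects, then let DBSCAN group the near-identical ones.

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.cluster import DBSCAN

    # Hypothetical records standing in for dicts derived from Django model instances.
    records = [
        {"name": "Wien", "country": "AT"},
        {"name": "Wien", "country": "AT"},   # a duplicate entry
        {"name": "Graz", "country": "AT"},
    ]

    X = DictVectorizer().fit_transform(records)   # sparse one-hot feature matrix
    labels = DBSCAN(eps=0.5, min_samples=2,
                    metric="manhattan").fit_predict(X)
    print(labels)  # [ 0  0 -1]: rows 0 and 1 cluster together, row 2 is noise

Entries sharing a non-negative label are duplicate candidates; label -1 marks entries DBSCAN considers noise, i.e. without a close-enough match.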
At the end of January, together with friends, I went to Brussels to attend FOSDEM. We took the night train, but there were a couple of broken-down trains, so the ride took 26 hours instead of one night. It is a good thing we had a one-day buffer and FOSDEM only started on Saturday. As usual there were too many talks to visit, so I'll have to watch some of the recordings in the next few weeks.

Some examples of talks I found interesting so far:

* a talk about supporting Python web deployments with Rust in the Rust Developer room
* a talk about duckdb in the Python Developer room
* an introduction to particleos in the Distributions Developer room
bisco.org
February 7, 2026 at 12:46 AM
Dirk Eddelbuettel: rfoaas 2.3.3: Limited Rebirth
<p><img alt="rfoaas greed example" src="https://dirk.eddelbuettel.com/blog/code/rfoaas/rfoaas_2018-08.png" style="float: left; margin: 10px 10px 10px 0;" width="506" /></p> <p>The original <a href="https://www.foaas.com">FOAAS</a> site provided a rather wide variety of REST access points, but it sadky is no more (while the <a href="https://github.com/tomdionysus/foaas">old repo</a> is still there). A newer replacement site <a href="https://foass.1001010.com/">FOASS</a> is up and running, but with a somewhat reduced offering. (For example, the two accessors shown in the screenshot are no more. <em>C’est la vie.</em>)</p> <p>Recognising that perfect may once again be the enemy of (somewhat) good (enough), we have rejigged the <a href="https://dirk.eddelbuettel.com/code/rfoaas.html">rfoaas</a> package in a new release 2.3.3. (The precding version number 2.3.2 corresponded to the upstream version, indicating which API release we matched. Now we just went ‘+ 0.0.1’ but there is no longer a correspondence to the service version at <a href="https://foass.1001010.com/">FOASS</a>.)</p> <p>Accessor functions for each of the now available access points are provided, ans the random sampling accessor <code>getRandomFO()</code> now picks from that set.</p> <p>My <a href="https://dirk.eddelbuettel.com/cranberries/">CRANberries</a> service provides a comparison to <a href="https://dirk.eddelbuettel.com/cranberries/2026/02/04/#rfoaas_2.3.3">the previous release</a>. Questions, comments etc should go to the <a href="https://github.com/eddelbuettel/rfoaas/issues">GitHub issue tracker</a>. More background information is on the <a href="https://dirk.eddelbuettel.com/code/rfoaas.html">project page</a> as well as on the <a href="https://github.com/eddelbuettel/rfoaas">github repo</a></p> <p style="font-size: 80%; font-style: italic;"> This post by <a href="https://dirk.eddelbuettel.com">Dirk Eddelbuettel</a> originated on his <a href="https://dirk.eddelbuettel.com/blog/">Thinking inside the box</a> blog. If you like this or other open-source work I do, you can <a href="https://github.com/sponsors/eddelbuettel">sponsor me at GitHub</a>. </p><p></p>
dirk.eddelbuettel.com
February 5, 2026 at 3:41 AM