nirik :fedora: :redhat:
@nirik.fosstodon.org.ap.brid.gy
Sysadmin shouting at clouds. #fedora #redhat

[bridged from https://fosstodon.org/@nirik on the fediverse by https://fed.brid.gy/ ]
Weekly recap of things for me in #fedora

This week it's a weird bug with httpd reloading and then some sad tales of scrapers

https://www.scrye.com/blogs/nirik/posts/2026/02/14/misc-fedora-bits-2nd-week-of-feb-2026/
misc fedora bits 2nd week of feb 2026
Another weekly recap of happenings around fedora for me.

## Strange long httpd reload times on proxy11

I spent a fair bit of time looking at one of our proxies. We have them all do a reload (aka 'graceful restart') every hour when we update a ticket key on them. For the vast majority of them, that's fine and works as expected. However, proxy11 decided to start taking a while (like 12-15 seconds) to reload, causing our monitoring to alert that it was down... then back up. In the end, the problem seemed to be somehow related to some old tls certificates that were present, but not used anywhere. All I can think of is that it's doing some kind of parsing of all certs and somehow those old ones cause it undue processing time. I removed those old certs and reload times went way back down again. I'm tempted to try and figure out what exactly it's doing here, but I already spent a fair bit of time on it and it's working again now, so I guess I will just shrug and move on.

## Anubis and download servers

A while back I had to hurriedly deploy anubis in front of our download servers. This was due to the scrapers deciding to just download every rpm / iso from every fedora release since the dawn of time, at massive concurrency. This was saturating one of our 10G links completely, and making another somewhat full. So, I deployed anubis and it dropped things back to 'normal' again.

Fast forward to this last week, and my rush in deploying anubis came back to bite me. We have a cloudfront distribution that uses our download servers as its 'origin'. Then we point all aws network blocks at that for any fedora instances in aws. This is a win for us, since everything for them is then cached on the aws side, saving bandwidth, and a win for aws users, since that traffic is 'local' to them, so it's faster and they don't need to be billed for ingress either. Last week, anubis started blocking CloudFront, so users in aws would get an anubis challenge page instead of the actual content they were expecting.

But why did this just happen now? Well, as near as I could determine, someone/scrapers were hitting the CloudFront endpoints and crawling our download server (fine, no problem there), but then they hit a directory that they handled poorly. The directory was used/last updated about 11 years ago, with a readme file explaining that the content was moved and no longer there. Great. However, it also had its previous subdirectories as links to '.' (ie, the current directory). Since scrapers don't use any of the 20 years of crawling code, and instead just brute force things, this resulted in a bunch of requests like:

GET /foo/
GET /foo/foo/
GET /foo/foo/foo/

and so on. These are all really small (just a directory listing), so that meant they could make requests really, really fast. So, at some point anubis started challenging those CloudFront connections and boom.

The problem with the hurried deployment I had made there was that the policy file I had deployed was not actually being used. I had allowed CloudFront, but it didn't seem to help any, and it took me far too long to figure out that anubis was starting up, printing one error about not being able to read the policy file, and just running with the default configuration. ;( It turned out to be a podman/selinux interaction and is now fixed. I also removed those '.' links and set that directory tree to just 403 all requests to it.
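The podman/selinux interaction mentioned above is the usual SELinux volume label issue: without a label, the container can't read the bind-mounted policy file, so anubis quietly falls back to its defaults. A minimal sketch of the shape of the fix (image name, paths and port here are illustrative, not our actual deployment):

```bash
# ':Z' has podman relabel the bind-mounted policy file so the container
# is allowed to read it on an SELinux-enforcing host.
podman run -d --name anubis \
  -v /etc/anubis/botPolicies.yaml:/data/botPolicies.yaml:ro,Z \
  -p 8923:8923 \
  example.registry/anubis:latest
```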
## Anubis and forge

Also this week, folks were reporting problems with our new forgejo forge. Anubis was doing challenges when people were trying to submit comments, and it was messing them up. In the end here, I just needed to adjust the config to allow POSTs through. At least right now scrapers aren't doing any POSTs, so just allowing those seems to fix the issues people were having.

## Some more scrapers

Friday we had them hitting release-monitoring.org. This time it was what I am calling a 'type 0' scraper: it was all coming from one cloud ip and I could just block them.
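For that kind of single-source scraper, a plain firewalld drop rule on the affected host is enough; something like this, with a documentation range standing in for the real addresses:

```bash
# Drop everything from the offending cloud address block (example range),
# then reload so the permanent rule takes effect.
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.0/24" drop'
firewall-cmd --reload
```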
This morning, a bit ago, we had a group hit/find the 'search' button on koji.fedoraproject.org, taking it offline. I was able to block the endpoint for a few hours and they went away, but no telling if they will be back. These were the 'type 2' kind (a botnet using users' IPs/browsers from hundreds of thousands of different addresses).

I am sad that the end game here sounds like there's not going to be much of an open internet anymore, ie, in self defense, sites will all have to require registration of some kind before working. I can only hope business models change before it comes to that.

## comments? additions? reactions?

As always, comment on mastodon:
www.scrye.com
February 14, 2026 at 6:48 PM
Nice relaxing saturday morning... and oh no, scrapers found the 'search' box on koji. ;(

The end of this timeline is an internet that is no longer open, which I am sure they don't care about, but I do.
February 14, 2026 at 5:38 PM
Sadly, I guess I should come up with a beg bounty reply template.
February 13, 2026 at 1:58 AM
Yesterday and today in my #fedora land:
* More digging into anubis problems on download servers. Finally I think everything is solved. There were 2 issues from when I hurriedly deployed it there to prevent doom: First they were not sharing a private key, so challenges were not synced between […]
Original post on fosstodon.org
fosstodon.org
February 13, 2026 at 1:03 AM
Today in my #fedora land:
* Problems with anubis and cloudfront. It seems like it starts getting challenges after a few hours even though it should be allowed. Watching it; it's working for now.
* meeting-o-rama (including a 2 hour fesco meeting... wheeee)
* Got storinator01 back online mostly. Only […]
Original post on fosstodon.org
fosstodon.org
February 11, 2026 at 4:55 AM
Today in my #fedora land:
* Infra and Releng sprint planning meeting - 2026-02-09 7am
* Fedora Release Engineering meeting - 2026-02-09 8am
* Looked at a pr creation issue on src ( https://pagure.io/pagure/issue/5544 )
* Spent a lot of time finding why a proxy was taking >12 seconds on reload […]
Original post on fosstodon.org
fosstodon.org
February 10, 2026 at 5:05 AM
well, my gauntlet machine still mostly works! The monitor has a lot of burn in, the red gun is not doing its job, and it froze up on me a few times, but overall still working.

I need to look for the keys.

Probably going to see if I can sell it to someone […]

[Original post on fosstodon.org]
February 9, 2026 at 12:19 AM
Cinder asks: Have you petted a cat for #caturday ?
February 7, 2026 at 7:03 PM
Saturday weekly recap. It's a shorter one today with just lots of talk about the recent branching off of #fedora 44 from rawhide. Enjoy.

https://www.scrye.com/blogs/nirik/posts/2026/02/07/misc-fedora-bits-1st-week-of-feb-2026/
misc fedora bits 1st week of feb 2026
Welcome to a bit of a recap of the first week of February. It will be a shorter one today...

## Fedora 44 Branching

The big news this week was Fedora 44 branching off rawhide. This is by far the most complicated part of the release. There are updates that have to happen in a ton of places, all in the right order and with the right content. Things didn't start when they were supposed to (tuesday morning), because we had some last minute mass rebuilds (golang and ghc). Then, they didn't start wed morning because we were trying to get the gnome 50 update to pass gating. Finally, on thursday, we just ended up unpushing that update and starting the process.

This time the releng side was run by Patrik. It's the first time he's done this process, but he did a great job! He asked questions at each step, and we were able to clarify and reorder the documentation, so I hope things will be even more clear and easy next cycle. You can see the current SOP on it (before changes from this cycle): https://docs.fedoraproject.org/en-US/infra/release_guide/sop_mass_branching/ Look at all those steps!

This was also a bit of a long week because I am in PST and Patrik is in CET, so I had to get up early and he had to stay late. Timezones are annoying. :) Anyhow, I think things went quite smoothly. We got rawhide and branched composes right away, and there are only a few minor items to clean up and figure out how to do better.

## Sprint planning meeting again monday

We had our last sprint planning meeting almost two weeks ago, so on monday it's time for another one. We did manage to run the last one in matrix, and although we did run over time, I think it went not too badly. I'll probably do some prep work on things this weekend for it. But if anyone wants to join in/read back, it will be in #meeting-3:fedoraproject.org at 15UTC on matrix.

## comments? additions? reactions?

As always, comment on mastodon:
www.scrye.com
February 7, 2026 at 6:49 PM
Yesterday in #fedora land:
* Trying to get branching started, too many things in flight.
* Got all the rawhide and eln packages resigned with f45 key.
* Helped update a bunch of docs for branching.
* sent out email on the branching delay.
Today
* finally branching started. Helped Patrik do all […]
Original post on fosstodon.org
fosstodon.org
February 6, 2026 at 2:26 AM
And yesterday in #fedora land for me:
* Branching didn't happen due to a bunch of things still landing. ;( Tried to move them along with mixed success.
* Brought a newly arrived GPU server online in the datacenter, but it needs some drac licensing sorted before I can get to the actual machine/install.
* Fixed […]
Original post on fosstodon.org
fosstodon.org
February 5, 2026 at 1:38 AM
Got a bit behind... monday in #fedora land:
* Some meetings
* Got 2 replacement 10G cards for some copr hypervisors, got them installed and machines installed/online
* Reviewed a zillion PR's for branching tomorrow (except it wasn't actually tomorrow)
* Gathered a bunch more info on ipv6 issues […]
Original post on fosstodon.org
fosstodon.org
February 5, 2026 at 1:34 AM
Pretty cool you can scan pet microchips with the flipper zero... as long as you can keep the cat from biting it.

His temp was normal. 😸
February 3, 2026 at 5:16 AM
Another saturday, another #fedora week recap...

This time: Some datacenter move cleanup, mass update/reboots with firmware, and a preview of some homeassistant posts I hope to do soon.

https://www.scrye.com/blogs/nirik/posts/2026/01/31/misc-fedora-bits-for-end-of-jan-2026/
misc fedora bits for end of jan 2026
Another busy week for me. There's been less new work coming in, so it's been a great chance to catch up on backlog and get things done.

## rdu2cc to rdu3 datacenter move cleanup

In december, just before the holidays, almost all of our hardware from the old rdu2 community cage was moved to our new rdu3 datacenter. We got everything that was end user visible moved and working before the break, but that still left a number of things to clean up and fully bring back up. So, this last week I tried to focus on that.

* There were 2 copr builder hypervisors that were moved fine, but their 10G network cards just didn't work. We tried all kinds of things, but in the end just asked for replacements. Those quickly arrived this week and were installed. One of them just worked fine, the other one I had to tweak some settings on, but finally got it working too, so both of those are back online and reinstalled with RHEL10.
* We had a bunch of problems getting into the storinator device that was moved, and in the end the reason why was simple: it was not our storinator at all, but a centos one that was decommissioned. They are moving the right one in a few weeks.
* There were a few firewall rules to get updated and ansible config to get things all green in that new vlan. That should be all in place now.
* There is still one puzzling ipv6 routing issue for the copr power9s. Still trying to figure that out. https://forge.fedoraproject.org/infra/tickets/issues/13085

## mass update/reboot cycle

This week we also did a mass update/reboot cycle over all our machines. Due to the holidays and various scheduling stuff we hadn't done one for almost 2 months, so it was overdue. There were a number of minor issues, many of which we knew about and a few we didn't:

* On RHEL10 hosts, you have to update redhat-release first and then the rest of the updates, because the post quantum crypto on new packages needs the keys in redhat-release. ;( (see the sketch after this list)
* docker-distribution 3.0.0 is really, really slow in our infra, and also switches to using an unprivileged user instead of root. We downgraded back for now.
* anubis didn't start right on our download servers. Fixed that.
* A few things got 'stuck' trying to listen to amqp messages while the rabbitmq cluster was rebooting.
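In practice that first item is just a two-step update; a minimal sketch on a RHEL 10 host:

```bash
# Pull in redhat-release (and its new signing keys) on its own first...
dnf -y update redhat-release
# ...then the rest of the updates can be verified and applied as usual.
dnf -y update
```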
This time we also applied all the pending firmware updates, to all the x86 servers at least. That caused reboots to take ~20min or so on those servers as the updates applied, making the outage longer and more disruptive than we would like, but it's nice to be fully up to date on firmware again. Overall it went pretty smoothly. Thanks to James Antill for planning and running most all of the updates.

## Some homeassistant fun

I'm a bit behind on posting some reviews of new devices added to my home assistant setup and will try and write those up soon, but as a preview:

* I got a https://shop.hydrificwater.com/pages/buy-droplet installed in our pumphouse. Pretty nice to see exact flow/usage of all our house water. There are some annoyances though.
* I got a continuous glucose monitor and set it up with juggluco (open source android app), which writes to health connect on my phone, and the android home assistant app reads it and exposes it as a sensor. So, now I have pretty graphs, and also figured out some nice ways to track related things.
* I've got a solar install coming in the next few months; I will share how managing all that looks in home assistant. Should be pretty nice.

## comments? additions? reactions?

As always, comment on mastodon:
www.scrye.com
January 31, 2026 at 6:35 PM
Yesterday in #fedora infra land:
* Filed ticket to get 10G cards replaced after they arrived.
* Fixed up signing f45 and stuck vault to get updates flowing ( https://forge.fedoraproject.org/infra/tickets/issues/13093 )
* Infrastructure weekly meeting - 2026-01-29 9a
* Moved various hardware […]
Original post on fosstodon.org
fosstodon.org
January 30, 2026 at 10:19 PM
On wed in my #fedora land:
* Updated various hardware tickets with more info for replacements.
* 1x1 with ashcrow - 2026-01-28 10:30am
* Updated ticket on centos-stream rsyncing
* Ran chgrp on src.stg and closed related issue ( https://forge.fedoraproject.org/infra/tickets/issues/13076 )
* […]
Original post on fosstodon.org
fosstodon.org
January 30, 2026 at 7:56 PM
Reposted by nirik :fedora: :redhat:
Visit the Fedora booth at @fosdem in building H! We'll be super happy to see you!

We will also be part of the Distributions DevRoom on Sunday, and be on the look out for office hours with different teams with the Fedora Project.

Have fun!

#FOSDEM #fedora #linux #opensource
January 30, 2026 at 3:59 PM
Reposted by nirik :fedora: :redhat:
One thing I’m learning this season:

You don’t need to sprint every week to move forward.
Consistency beats urgency. Calm beats burnout.

#techlife #careergrowth #sustainablework #itlife
January 28, 2026 at 3:26 PM
Today in #fedora land for me:
* CLE high level sprint review and planning - 2026-01-27 7am
* Looked at stg pkgs perm issue ( https://forge.fedoraproject.org/infra/tickets/issues/13076 )
* Got bvmhost-s390x-01.stg booting again and working
* Merged and deployed some ansible PRs
* Requested […]
Original post on fosstodon.org
fosstodon.org
January 28, 2026 at 1:53 AM
Yesterday in my #fedora corner:
* bunch of meetings
* some flock estimating/planning/talk submitting (deadline is the 2nd!)
* Setup all stg virthosts to apply firmware updates on next boot.
* Setup all openqa machines to apply firmware on next boot.
* Upgraded both stg and prod openshift […]
Original post on fosstodon.org
fosstodon.org
January 28, 2026 at 1:51 AM
Another saturday recap of things going on in #fedora infra land.

This week includes: infra tickets migrated to forge, mass rebuild done, scrapers (again) and infra sprint planning coming soon to matrix.

https://www.scrye.com/blogs/nirik/posts/2026/01/24/misc-fedora-bits-for-third-week-of-jan-2026/
misc fedora bits for third week of jan 2026
Another week, another recap here in longer form. I started to get all caught up from the holidays this week, but then got derailed later in the week, sadly.

## Infra tickets migrated to new forgejo forge

On tuesday I migrated our https://pagure.io/fedora-infrastructure (pagure) repo over to https://forge.fedoraproject.org/infra/tickets/ (forgejo). Things went mostly smoothly, the migration tool is pretty slick, and I borrowed a bunch from the checklist that the quality folks put together ( https://forge.fedoraproject.org/quality/tickets/issues/836 ). Thanks Adam and Kamil! There are still a few outstanding things I need to do:

* We need to update our docs everywhere they mention the old url; I am working on a pull request for that.
* I cannot seem to get the fedora-messaging hook working right. It might well be something I did wrong, but it is just not working.
* Of course no private issues migrated; hopefully someday (soon!) we will be able to just migrate them over once there's support in forgejo.
* We could likely tweak the templates a bit more.

Once I sort out the fedora-messaging hook, I should be able to look at moving our ansible repo over, which will be nice. forgejo's pull request reviews are much nicer, and we may be able to leverage lots of other fun features there.

## Mass rebuild finished

Even though it started late (it was supposed to start last wed, but didn't end up really starting until friday morning), it finished over the weekend pretty easily. There was some cleanup and such and then it was tagged in. I updated my laptop and everything just kept working. I would like to shout out that openqa caught a mozjs bug landing (again) that would have broken gdm, so that got untagged and sorted and I never hit it here.

## Scrapers redux

Wed night I noticed that one of our two network links in the datacenter was topping out (10Gbit). I looked a bit, but marked it down to the mass rebuild landing and causing everyone to sync all of rawhide. Thursday morning there were more reports of issues with the master mirrors being very slow. The network was still saturated on that link (the other 10G link was only doing about 2-3Gbit/sec). On investigation, it turned out that scrapers were now scraping our master mirrors. This was bad because all the bandwidth used downloading every package ever over http was saturating the link.

These seemed to mostly be what I am calling "type 1" scrapers. "type 1" are scrapers coming from clouds or known network blocks. These are mostly known in anubis's list and it can just DENY them without too much trouble. These could also be blocked manually, but you would have to maintain the list(s). "type 2" are the worse kind. Those are the browser botnets, where the connections are coming from a vast, diverse set of consumer IPs, and also, since they are just using someone else's computer/browser, they don't care too much if they have to do a proof of work challenge. These are much harder to deal with, but if they are hitting specific areas, upping the amount of challenge anubis gives those areas helps, if only to slow them down.

First order of business was to set up anubis in front of them. There's no epel9 package for anubis, so I went with the method we used for pagure (el8) and just set it up using a container. There was a bit of tweaking around to get everything set, but I got it in place by mid morning and it definitely cut the load a great deal there. Also, at the same time, it turned out we had some prefork apache config on the download servers which we have not used in a while. So, I cleaned all that up and updated things so their apache setup could handle lots more connections.
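Roughly, that means event MPM limits instead of the old prefork ones. A rough sketch, with illustrative numbers rather than our actual values, assuming mpm_event is already the active MPM:

```bash
# Raise connection limits so each download server can hold many more
# simultaneous clients than the prefork-era defaults allowed.
cat > /etc/httpd/conf.d/mpm-tuning.conf <<'EOF'
ServerLimit          16
ThreadsPerChild     100
MaxRequestWorkers  1600
EOF
# A graceful reload picks up the new limits without dropping connections.
systemctl reload httpd
```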
The bandwidth used was still high though, and a bit later I figured out why. The websites had been updated to point downloads of CHECKSUM files at the master mirrors. This was to make sure they were all coming from a known location, etc. However, accidentally _all_ artifact download links were pointing at the master mirrors. Luckily we could handle the load, and also luckily there wasn't a release going on, so fewer people were downloading. Switching that back to point at the mirrors got things happier. So, hopefully scrapers are handled again... for now.

## Infra Sprint planning meeting

So, as many folks may know, our Red Hat teams are all trying to use agile and scrum these days. We have various things in case anyone is interested:

* We have daily standup notes from each team member in matrix. They submit with a bot and it posts to a team room. You can find them all in the #cle-standups:fedora.im space on matrix. This daily is just a quick 'what did you do', 'what do you plan to do', plus any notes or blockers.
* We have been doing retro/planning meetings, but those have been in video calls. However, there's no reason they need to be there, so I suggested, and we are going to try, just meeting on matrix for anyone interested. The first of these will be monday in the #meeting-3:fedoraproject.org room at 15UTC. We will talk about the last 2 weeks and plan what we want to try and get done in the next 2.

The forge project boards are much nicer than the pagure boards were, and we can use them more effectively. Here's how it will work: Right now the current sprint is in: https://forge.fedoraproject.org/infra/tickets/projects/325 and the next one is in: https://forge.fedoraproject.org/infra/tickets/projects/326 On monday we will review the first, move everything that wasn't completed over to the second, add/tweak the second one, then close the first one, rename the 'next' to 'current', and add a new 'next' one. This will allow us to track what was done in which sprint and be able to populate things for the next one. Additionally, we are going to label tickets that come in that are just 'day-to-day' requests we need to do, and add those to the current sprint to track. That should help us get an idea of the things we are doing that we cannot plan for.

## Mass update/reboot outage

Next week we are also going to be doing a mass update/reboot cycle, with an outage on thursday. This is pretty overdue, as we haven't done one since before the holidays.

## comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115951447954013009
www.scrye.com
January 24, 2026 at 6:19 PM
Today in my #fedora corner:
Looking at heavy network traffic on download servers (scrapers!)
Fedora Infrastructure meeting - 2026-01-22 - 9am
Deployed anubis to all the download servers to mitigate scraping.
Infra daily meeting - 2026-01-22 noon
Looked at proxies when they too were using a lot of […]
Original post on fosstodon.org
fosstodon.org
January 23, 2026 at 2:02 AM
So, looks like the AI scrapers have moved on to... #fedora download servers. ;(

Working on putting them behind anubis as a first step, to see if that mitigates it or if they are type 2 scrapers (botnet from captive browsers).

https://forge.fedoraproject.org/infra/tickets/issues/13075
Random timeouts at dl.fedoraproject.org
### Description of request Hello, We've recently been impacted by "failed to fetch key at https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9 , error was: Request failed: ", this is showing up in all GNOME DCs both hosted on...
forge.fedoraproject.org
January 22, 2026 at 6:28 PM
yesterday in my #fedora land:
* Bunch o meetings
* Migrated https://pagure.io/fedora-infrastructure over to https://forge.fedoraproject.org/infra/tickets
* A bunch of things related to the move (making templates, updating all old tickets with link to new, adjusting various forgejo settings)
Overview - fedora-infrastructure - Pagure.io
pagure.io
January 21, 2026 at 10:01 PM
Reposted by nirik :fedora: :redhat:
Check out EasySpeak, "a privacy-focused voice control system for Linux desktops running GNOME/Wayland."

@matthartley shows this off on Fedora Linux!

➡️ https://www.youtube.com/watch?v=dl5m2Zo1oIE

#easyspeak #fedora #linux #accessibility
January 21, 2026 at 2:13 AM