Diaper Regression
diaperregression.com
@diaperregression.com
ABDL / Pan / Diaper lover / bed wetter

🔞 #abdl
pfp: @classyshrimp.bsky.social
You’re simply wrong; educate yourself on the PROTECT Act of 2003, which does make this illegal and has been used successfully to charge individuals over loli/shota in the US. Outside of the US, I’m not sure about the legal protections. www.congress.gov/bill/108th-c...
May 11, 2025 at 6:53 PM
I’m pretty sure I know this reference; is it related to a certain pop culture cybersecurity event?
February 3, 2025 at 1:38 AM
I’m sure most won’t care, but if you’re curious for more info, let me know. It’s super satisfying to have your own manga website and be able to rip from the web to it, with full metadata, just by sending the link. On iOS I even have a shortcut, so I just share the webpage to the shortcut and it goes.
December 29, 2024 at 3:42 PM
I may release this on GitHub one day. The code pulls everything it needs from the app settings file, so it can be adjusted for your setup. I might also look into wrapping it in a Docker image for a one-stop setup, so you could deploy it alongside Kavita in one compose file.
December 29, 2024 at 3:42 PM
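The one-compose-file deployment mentioned above could look roughly like this. This is a hypothetical sketch, not the author's published setup: the `downloader` service name, ports, and paths are placeholders, and the Kavita image tag should be checked against the official docs.

```yaml
# Hypothetical docker-compose: Kavita plus the downloader web API in one file.
services:
  kavita:
    image: jvmilazz0/kavita:latest    # verify the current official image name
    volumes:
      - ./library:/manga              # the library Kavita scans
      - ./kavita-config:/kavita/config
    ports:
      - "5000:5000"
  downloader:
    build: .                          # the .NET web API described in this thread
    volumes:
      - ./library:/manga              # shared, so finished .cbz files land in the library
    ports:
      - "8080:8080"
```

Sharing one volume between the two services is what lets the downloader drop a finished CBZ straight into the directory Kavita scans.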
Having a few days off gave me the time to really dig into this new concept, and I loved it. I can pull from five major sources; if I can’t find something on any of them, it’s nowhere online.
December 29, 2024 at 3:42 PM
Once all of this is confirmed, it connects to the API for my Kavita instance and triggers a scan of the library to rebuild the data on the server. This makes sure I don’t have to scan manually; it just magically shows up.
December 29, 2024 at 3:42 PM
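Triggering that scan could be sketched like this in C#. The route, query parameter, and bearer-token auth shown here are assumptions about Kavita's REST API, not confirmed by this thread; a real implementation should be checked against the Swagger UI a Kavita instance exposes.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Hypothetical Kavita client: asks the server to rescan one library.
public static class KavitaClient
{
    public static async Task TriggerScanAsync(Uri baseUrl, string jwt, int libraryId)
    {
        using var http = new HttpClient { BaseAddress = baseUrl };
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", jwt);

        // Assumed endpoint shape; verify against your Kavita version's API docs.
        var response = await http.PostAsync($"/api/Library/scan?libraryId={libraryId}", null);
        response.EnsureSuccessStatusCode(); // scan queued; new CBZs then show up on their own
    }
}
```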
It then gathers the page sources and downloads them in parallel. I found that 10 threads is safe enough that sites won’t IP-ban you for scraping. Once everything is in a directory, it exports the comic info to XML, zips the directory into the CBZ format, marks it hidden, and moves it to the library directory.
December 29, 2024 at 3:42 PM
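The bounded-parallelism download and CBZ packaging described above could be sketched as follows. This is an illustrative reconstruction, not the author's code: the file-naming scheme and the `SemaphoreSlim(10)` gate (standing in for "10 threads") are assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class PageDownloader
{
    private static readonly HttpClient Http = new();

    // Download all pages with at most 10 requests in flight at once.
    public static async Task DownloadAllAsync(IReadOnlyList<Uri> pages, string dir)
    {
        Directory.CreateDirectory(dir);
        var gate = new SemaphoreSlim(10); // conservative cap to avoid IP bans
        var tasks = pages.Select(async (url, i) =>
        {
            await gate.WaitAsync();
            try
            {
                var bytes = await Http.GetByteArrayAsync(url);
                await File.WriteAllBytesAsync(Path.Combine(dir, $"{i:D3}.jpg"), bytes);
            }
            finally { gate.Release(); }
        });
        await Task.WhenAll(tasks);
    }

    // A .cbz is just a renamed .zip, so standard zip tooling is enough.
    public static void PackCbz(string dir, string cbzPath)
    {
        ZipFile.CreateFromDirectory(dir, cbzPath);
        File.SetAttributes(cbzPath, FileAttributes.Hidden); // mark hidden, as described above
    }
}
```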
The first thing the scraper does is build the ComicInfo file, which adds the metadata. The sites I pull from all have this in their own systems for tags, parodies, characters, artists, and whatnot, so I just define the part of the page to scrape and it builds all of this out.
December 29, 2024 at 3:42 PM
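Writing that metadata file could look like the sketch below. The property names follow the ComicInfo.xml schema that Kavita reads, but this minimal class and its fields are my assumption about which fields the author populates, not a confirmed list.

```csharp
using System.IO;
using System.Xml.Serialization;

// Minimal ComicInfo.xml writer; field names follow the ComicInfo schema.
public class ComicInfo
{
    public string Title  { get; set; } = "";
    public string Writer { get; set; } = ""; // artist/author scraped from the page
    public string Tags   { get; set; } = ""; // comma-separated tags from the site
    public string Web    { get; set; } = ""; // source URL, useful for provenance
}

public static class ComicInfoWriter
{
    public static void Write(ComicInfo info, string directory)
    {
        var serializer = new XmlSerializer(typeof(ComicInfo));
        using var stream = File.Create(Path.Combine(directory, "ComicInfo.xml"));
        serializer.Serialize(stream, info); // lands next to the pages before zipping
    }
}
```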
The first thing it does is figure out what the provider is and whether I’ve created a scraper for it. After doing a site or two, I was able to generalize the scrapers, so implementing a new site is very easy.
December 29, 2024 at 3:42 PM
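That provider-detection step generalizes naturally to an interface plus a registry. The names here are illustrative, not taken from the author's code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical generalized-scraper shape: each site implements one interface.
public interface ISiteScraper
{
    bool CanHandle(Uri url);                     // usually just a host-name match
    IReadOnlyList<Uri> GetPageUrls(Uri gallery); // page images to download
}

public sealed class ScraperRegistry
{
    private readonly List<ISiteScraper> _scrapers;
    public ScraperRegistry(IEnumerable<ISiteScraper> scrapers) => _scrapers = scrapers.ToList();

    // First scraper that recognizes the provider, or null if the site is unsupported.
    public ISiteScraper? Resolve(Uri url) => _scrapers.FirstOrDefault(s => s.CanHandle(url));
}
```

With this shape, adding a new site means writing one small `ISiteScraper` implementation and registering it, which matches the "very easy" claim above.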
Once the host receives the URL, it adds it to a table to queue it for download, since that can take some time. A background task then polls that table once a minute looking for new things to download, and processes the links one by one.
December 29, 2024 at 3:42 PM
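In .NET, the once-a-minute polling task described in this post is a natural fit for a `BackgroundService`. This is a sketch under assumed abstractions (`IDownloadQueue`, `IDownloader` stand in for the real table access and scraper pipeline):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Assumed abstractions over the queue table and the download pipeline.
public interface IDownloadQueue { Task<IReadOnlyList<string>> GetPendingAsync(CancellationToken ct); }
public interface IDownloader    { Task ProcessAsync(string url, CancellationToken ct); }

public class DownloadWorker : BackgroundService
{
    private readonly IDownloadQueue _queue;
    private readonly IDownloader _downloader;

    public DownloadWorker(IDownloadQueue queue, IDownloader downloader)
    {
        _queue = queue;
        _downloader = downloader;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Poll the queue table once a minute, as described above.
        using var timer = new PeriodicTimer(TimeSpan.FromMinutes(1));
        while (await timer.WaitForNextTickAsync(stoppingToken))
        {
            foreach (var link in await _queue.GetPendingAsync(stoppingToken))
                await _downloader.ProcessAsync(link, stoppingToken); // one by one
        }
    }
}
```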
The first thing to do was set up a .NET web API endpoint I can hit from anywhere, which takes a POST request with JSON containing the URL of the manga I want to download. This lets me run the web API on my home PC and shoot URLs to it from my phone anywhere through my reverse proxy.
December 29, 2024 at 3:42 PM
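An endpoint like the one described in this post could be sketched as an ASP.NET Core minimal API. The route name and JSON shape here are illustrative guesses, not the author's actual contract:

```csharp
// Hypothetical minimal API: accepts { "url": "https://..." } and queues it.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapPost("/api/download", (DownloadRequest req) =>
{
    // In the real setup this would insert the URL into the queue table
    // that a background task later polls for downloads.
    return Results.Accepted(value: new { queued = req.Url });
});

app.Run();

record DownloadRequest(string Url);
```

Because it is plain HTTP POST, anything that can send a request (a phone shortcut, curl, another service) can feed the queue through the reverse proxy.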
Before, I would just download it manually and drop it into the folder that it pulls from. Now I have a much better way, so I don’t have to do it from my computer. Warning: I’m about to nerd out a bit about the infrastructure and technology I’m using.
December 29, 2024 at 3:42 PM