#zpool
bectl: log modifying functions to zpool history

cgit.freebsd.org
December 17, 2025 at 5:14 PM
🚀 New Video!
Learn how to mine crypto on #ZPOOL using a #Bitaxe, #ESP32 (NerdMiner), or Raspberry Pi / Orange Pi 🛠️
ZPOOL auto-switches to the most profitable coin so you don’t have to.
#CryptoMining #RPI #ORANGEPI #Nerdminer

▶️ Watch now!
youtu.be/uBDjOjIbvYU
HOW TO MINE crypto coins on ZPOOL with a Bitaxe, ESP32 or a Pi | Orange Pi Raspberry Pi Nerdminer
YouTube video by Bloxy Labs
youtu.be
December 14, 2025 at 9:47 AM
guy on the FreeBSD forums complaining about how his 11 year old laptop gets unusable when they rsync files onto their zpool and extrapolates this to "is that all it takes to bring a freebsd server to its knees" and like, no, bro, writing to a full spinning rust pool with copies=2 is going to […]
Original post on infosec.exchange
infosec.exchange
December 13, 2025 at 9:44 PM
Setting up my first encrypted zfs pool is going well except for the part where I clobbered one of the wrong drives. At least it was only one of them. I don't think I needed anything on it but it did have things on it. Quite dislike that zpool create automatically runs in the background with no […]
Original post on chitter.xyz
chitter.xyz
December 12, 2025 at 4:02 AM
They're basically all I use for big home pools, just check the smart stats and make sure things aren't too ugly. If you want to be really cautious could make a zpool with raidz2(s) instead
December 11, 2025 at 3:51 AM
Do you ever fuck up a ctrl+r and just yeet an entire drive from your zpool in the process? Yeah, me neither
December 11, 2025 at 1:16 AM
This took me hours but I've got IT
FreeBSD 15.0 installed on the same zpool as Linux

I was trying this inside QEMU so I can learn how to do it and then install it on my main machine
December 10, 2025 at 5:28 PM
I've said this before. Good software is software you forget you're running.
Today, a #ZFS zpool is up to 87% capacity. What?
Oh yes, my full backups ran yesterday (first Sunday of the month).
This issue is partly why I'm moving to bigger zpools, perhaps done this week.

#Bacula
December 8, 2025 at 4:36 PM
Protect your data with a simple command:
zpool scrub poolname
Finds and fixes silent corruption and errors—before they become real problems.
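Expanding on the one-liner above: a scrub runs in the background, so checking on it afterwards is the other half of the routine. A sketch, with `tank` as a placeholder pool name (the periodic.conf knobs are FreeBSD's stock scheduling mechanism):

```shell
# Kick off a scrub; the command returns immediately and the
# scrub proceeds in the background.
zpool scrub tank

# Watch progress and see any repaired or unrepairable errors.
zpool status -v tank

# On FreeBSD, periodic(8) can schedule this instead; in /etc/periodic.conf:
#   daily_scrub_zfs_enable="YES"
#   daily_scrub_zfs_default_threshold="35"   # days between scrubs
```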
December 8, 2025 at 3:08 PM
Well, it can be fixed if you don't reboot first!

`zpool import` will show you the GUID of the problem pool. You can then use `zpool import -R /safepath $GUID BROKEN && zpool export BROKEN` to change the name recorded in the ZFS label. Might have to `-f` depending on how broken it is.
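Spelling out the rename-by-reimport trick from the post above, with a made-up GUID and `/safepath`/`BROKEN` as placeholder names:

```shell
# List importable pools; note the numeric GUID of the one
# whose name conflicts.
zpool import

# Re-import by GUID under a new name, rooted at a scratch altroot so no
# mountpoints collide, then export to persist the new name in the label.
zpool import -R /safepath 1234567890123456789 BROKEN \
  && zpool export BROKEN

# If the pool wasn't cleanly exported, add -f:
# zpool import -f -R /safepath 1234567890123456789 BROKEN
```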
December 2, 2025 at 6:32 PM
Holonet SSD space has been increased!

Ever have one of those "duh" moments? Yea I did. The reason why the upgrade failed last time was because I forgot to boot off the correct SSD and not the newly replaced drive in the Zpool. Only took me one reboot this time to remember there are 2 different […]
Original post on holonet.imperialba.se
holonet.imperialba.se
November 28, 2025 at 9:09 PM
I thought about raidz2 last night.

I prefer raidz2 over z1 - just that bit extra.

www.raidz-calculator.com tells me that 8 x 4TB SSDs will give me a 24TB zpool.
Free RAIDZ Calculator - Calculate ZFS RAIDZ Array Capacity and Fault Tolerance.
Online RAIDZ calculator to assist ZFS RAIDZ planning. Calculates capacity, speed, and fault tolerance characteristics for RAIDZ0, RAIDZ1, and RAIDZ3 setups.
www.raidz-calculator.com
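The calculator's number checks out with simple arithmetic: raidz2 spends two drives' worth of space on parity, so usable capacity is (drives - 2) x drive size. A quick sanity check in the shell:

```shell
# Back-of-envelope raidz capacity: (drives - parity) * drive size.
drives=8
size_tb=4
parity=2   # raidz2 reserves two drives' worth of parity

usable=$(( (drives - parity) * size_tb ))
echo "${usable}TB usable"   # 8 x 4TB in raidz2 -> 24TB usable
```

(Real-world usable space lands a bit lower after metadata and padding overhead.)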
November 27, 2025 at 3:03 PM
One big 16TB zpool (8 x 4TB SSDs) or 2 x 8TB zpools?

I got decisions to make now that all this stuff has come together.

dan.langille.org/2025/11/26/c...

#FreeBSD #ZFS
Creating a new zpool for r730-01 – Dan Langille's Other Diary
dan.langille.org
November 26, 2025 at 9:19 PM
Today I learned that periodic/daily/800.scrub does not initiate a scrub on a new zpool (i.e. a zpool which has never been scrubbed).

Well, perhaps it might scrub one day, but it didn't scrub last night.

bugs.freebsd.org/bugzilla/sho...
bugs.freebsd.org
November 26, 2025 at 12:43 PM
still some bookkeeping to do but i have nvidia drivers working, my zpool operational, and my steam library ON my zpool (and guild wars 2 works!)
November 25, 2025 at 4:57 AM
The last test before doing this on production.

I moved zroot from a larger zpool to a smaller zpool. It is now a very straightforward process. The hardest part may be making sure the old zroot no longer boots.

dan.langille.org/2025/11/23/t...

#FreeBSD #ZFS
Test run – moving to a SATADOM based zpool (zroot) using zfs snapshot and send | recv – Dan Langille's Other Diary
dan.langille.org
November 23, 2025 at 4:48 PM
I know many others have done this, but this is my first test procedure.

This shrinks a zpool by creating a new one on a different pair of drives in a zroot mirror.

In short:

zfs snapshot
zfs send | zfs receive
profit
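The three-step outline above, expanded into a hypothetical command sequence (pool and snapshot names are placeholders; the linked article has the real procedure, and the bootfs step is my assumption about what "profit" involves before rebooting):

```shell
# 1. Snapshot the source pool recursively.
zfs snapshot -r zroot@migrate

# 2. Replicate the whole dataset tree to the new, smaller pool.
zfs send -R zroot@migrate | zfs receive -F newroot

# 3. Point the boot loader at the new pool before rebooting.
zpool set bootfs=newroot/ROOT/default newroot
```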

dan.langille.org/2025/11/22/t...

#FreeBSD #ZFS
Test run – moving to a smaller zpool (zroot) using zfs snapshot and send | recv – Dan Langille's Other Diary
dan.langille.org
November 23, 2025 at 12:11 AM
Well the disk passed the full write/read test but trying to use it as part of the zpool without fail causes failures (both of itself and/or other disks) so I suspect the Beelink just can’t power them all properly.

Any suggestions for nice little computers with 6 or so m2 NVMe slots?
November 22, 2025 at 7:55 PM
Added a couple more NVMe drives to the Beelink Mini ME (early Black Friday ftw) and while expanding the zpool it dropped one, and then weirdly after a power cycle most of them (luckily not the boot one). Suspect one wasn’t seated properly as it showed […]

Original post on mastodon.jamesoff.net
November 22, 2025 at 1:25 PM
The first part of the test for reducing a zpool size has completed.

dan.langille.org/2025/11/19/m...

I went from two SATADOM drives to two SSDs.

From my local cafe, with coffee, like an adult.

For my next trick, I will move the zroot from 2x 1TB drives to 2x 128GB drives.

#FreeBSD #zfs
Moving a zpool from larger drives #ZFS #FreeBSD – Dan Langille's Other Diary
dan.langille.org
November 19, 2025 at 2:54 PM
I love how easy it is to expand ZFS storage pools. Hot plug disk, run "zpool add zroot device-name" and the disk is immediately part of the pool. No formatting, no changes to fstab, no resizing the filesystem. Been using ZFS for over 15 years and it still feels like magic.
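A sketch of the expansion described above, with `zroot` and `da1` as placeholder names. One caveat worth appending: `zpool add` attaches a new top-level vdev, which is effectively permanent (device removal is limited), and a lone disk added this way carries no redundancy:

```shell
# Preview the resulting layout first; -n changes nothing.
zpool add -n zroot da1

# Add the disk as a new top-level vdev; its space is usable immediately,
# with no partitioning, fstab edits, or filesystem resize.
zpool add zroot da1
```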
November 19, 2025 at 12:33 PM
I think my solution may be:

`zfs set quota=15G zroot/var/log`

That is similar to creating a separate zpool of size 15G and moving /var/log over to that zpool.

3/3
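The quota approach from the thread above, sketched out (dataset name and size are from the post; the `zfs get` check is just the obvious follow-up):

```shell
# Cap zroot/var/log at 15G so runaway logs can't fill the pool.
zfs set quota=15G zroot/var/log

# Verify; SOURCE shows "local" for a property set directly on the dataset.
zfs get quota zroot/var/log
```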
November 17, 2025 at 3:46 PM
dvl@r730-04:~ $ zpool status
  pool: zroot
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    ada0p3  ONLINE       0     0     0
	    ada1p3  ONLINE       0     0     0

errors: No known data errors
..
November 16, 2025 at 6:05 PM
dvl@r730-04:~ $ zpool list
NAME    SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
zroot   107G  1.01G  106G        -         -    0%   0%  1.00x  ONLINE  -
...
November 16, 2025 at 6:05 PM
First time for me booting UEFI. This was a proof-of-concept before I try moving the zroot of the main server from 2x 2.5" SSDs to these babies. Instead of zpool replace (which requires the incoming device to be >= in size), I'll use zpool add to move to a smaller zpool.
November 16, 2025 at 5:56 PM