Learn how to mine crypto on #ZPOOL using a #Bitaxe, #ESP32 (NerdMiner), or Raspberry Pi / Orange Pi 🛠️
ZPOOL auto-switches to the most profitable coin so you don’t have to.
#CryptoMining #RPI #ORANGEPI #Nerdminer
▶️ Watch now!
youtu.be/uBDjOjIbvYU
FreeBSD 15.0 installed on the same zpool as Linux
I was trying this inside QEMU so I can learn how to do it and then install it on my main machine
Today, a #ZFS zpool is up to 87% capacity. What?
Oh yes, my full backups ran yesterday (first Sunday of the month).
This issue is partly why I'm moving to bigger zpools, perhaps done this week.
#Bacula
zpool scrub poolname
Finds and fixes silent corruption and errors—before they become real problems.
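If you want to see whether the scrub found anything (or is still running), check the pool afterwards — poolname here is a stand-in, same as above:
zpool status -v poolname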
`zpool import` will show you the GUID of the problem pool. You can then use `zpool import -R /safepath $GUID BROKEN && zpool export BROKEN` to change the name recorded in the ZFS label. Might have to `-f` depending on how broken it is.
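For illustration, roughly what that sequence looks like end to end (the GUID and alt-root path below are made up):
zpool import                                    # lists importable pools and their GUIDs
zpool import -R /safepath 1234567890123456789 BROKEN
zpool export BROKEN                             # the label now records the new name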
Ever have one of those "duh" moments? Yeah, I did. The upgrade failed last time because I booted off the newly replaced drive in the zpool instead of the correct SSD. It only took me one reboot this time to remember there are 2 different […]
I prefer raidz2 over z1 - just that bit extra.
www.raidz-calculator.com tells me that 8 x 4TB SSDs will give me a 24TB zpool.
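That lines up with the simple arithmetic: raidz2 reserves two drives' worth of space for parity, so 8 x 4TB gives roughly (8 - 2) x 4TB = 24TB usable, before ZFS metadata overhead and the TB/TiB difference.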
I got decisions to make now that all this stuff has come together.
dan.langille.org/2025/11/26/c...
#FreeBSD #ZFS
Well, perhaps it might scrub one day, but it didn't scrub last night.
bugs.freebsd.org/bugzilla/sho...
I moved zroot from a larger zpool to a smaller zpool. It is now a very straightforward process. The hardest part may be making sure the old zroot no longer boots.
dan.langille.org/2025/11/23/t...
#FreeBSD #ZFS
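One possible way to handle that last step (my own sketch, not necessarily what the post does) is to make the old pool unattractive to the boot loader before retiring it — oldzroot is a placeholder name:
zpool set bootfs= oldzroot    # clear the boot dataset on the old pool
zpool export oldzroot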
This shrinks a zpool by creating a new one on a different pair of drives in a zroot mirror.
In short:
zfs snapshot
zfs send | zfs receive
profit
dan.langille.org/2025/11/22/t...
#FreeBSD #ZFS
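Spelled out a little more, under assumed pool and snapshot names (the linked post has the full write-up):
zfs snapshot -r zroot@move                      # recursive snapshot of the old, larger pool
zfs send -R zroot@move | zfs recv -F newzroot   # replicate every dataset into the new, smaller pool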
Any suggestions for nice little computers with 6 or so m2 NVMe slots?
[Original post on mastodon.jamesoff.net]
dan.langille.org/2025/11/19/m...
I went from two SATADOM drives to two SSDs.
From my local cafe, with coffee, like an adult.
For my next trick, I will move the zroot from 2x 1TB drives to 2x 128GB drives.
#FreeBSD #zfs
`zfs set quota=15G zroot/var/log`
That is similar to creating a separate zpool of size 15G and moving /var/log over to that zpool.
3/3
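To double-check the quota landed where you expect:
zfs get quota zroot/var/log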
  pool: zroot
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0

errors: No known data errors
..
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zroot   107G  1.01G   106G        -         -     0%     0%  1.00x  ONLINE  -
...