fjallrs.bsky.social
@fjallrs.bsky.social
I didn't know what I was in for
November 13, 2025 at 2:01 PM
3.0.0-pre.5 is out and brings journal compression support.

Here's 16K JSON blobs being written:
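(For context, roughly what such a write loop looks like against the basic API; just a sketch, not the actual harness: the partition name, payload shape, and count are illustrative, and exact names may differ between 2.x and v3.)

```rust
use fjall::{Config, PartitionCreateOptions};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open (or create) a keyspace and a partition to write into.
    let keyspace = Config::new(".bench-data").open()?;
    let blobs = keyspace.open_partition("blobs", PartitionCreateOptions::default())?;

    // A ~16 KiB JSON payload: a small header plus a padded body field.
    let padding = "x".repeat(16 * 1024);
    let value = format!(r#"{{"id":0,"kind":"blob","body":"{padding}"}}"#);

    for i in 0u64..100_000 {
        // Fixed-width keys so they sort lexicographically in insertion order.
        blobs.insert(format!("{i:020}"), value.as_bytes())?;
    }

    Ok(())
}
```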
November 8, 2025 at 2:56 PM
Yesterday or so I ran a benchmark overnight, and v3 scales much better for extremely large (100+ GB) databases.
October 28, 2025 at 10:19 PM
Thanks 😭
October 26, 2025 at 12:10 PM
And here's another read-heavy bench with 4K values, 5% sync random updates and 95% Zipfian reads.

ReDB uses 4x more disk space, and writes 6x slower. Interestingly, it's also ~5x slower in point reads - I think ReDB is not handling large values well; haven't read too much into its implementation.
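(Roughly what that workload mix looks like; a sketch only, not the actual harness. The key count, skew exponent, and op count are made up, the DB calls are left as comments, and it assumes the rand crate for randomness.)

```rust
/// Cumulative Zipf weights over `n` keys with exponent `s`
/// (rank r gets probability proportional to 1 / r^s).
fn zipf_cdf(n: usize, s: f64) -> Vec<f64> {
    let mut cdf: Vec<f64> = Vec::with_capacity(n);
    let mut acc = 0.0;
    for r in 1..=n {
        acc += 1.0 / (r as f64).powf(s);
        cdf.push(acc);
    }
    let total = acc;
    for w in &mut cdf {
        *w /= total;
    }
    cdf
}

/// Draw a key index with Zipfian skew via inverse-CDF lookup.
fn zipf_sample(cdf: &[f64]) -> usize {
    let u: f64 = rand::random();
    cdf.partition_point(|&c| c < u).min(cdf.len() - 1)
}

fn main() {
    const KEYS: usize = 1_000_000;
    let cdf = zipf_cdf(KEYS, 0.99); // classic YCSB-style skew

    for _ in 0..10_000_000u64 {
        if rand::random::<f64>() < 0.05 {
            // 5% of ops: update a uniformly random key (and sync).
            let _key = rand::random::<u64>() as usize % KEYS;
            // db.insert(key, new_4k_value) + sync would go here
        } else {
            // 95% of ops: point-read a Zipf-distributed (hot-skewed) key.
            let _key = zipf_sample(&cdf);
            // db.get(key) would go here
        }
    }
}
```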
October 11, 2025 at 2:14 PM
95% Zipfian reads, 5% random updates on a Kingston PLP SSD with 16K JSON blobs - sled DNF! (OOM)

fjall uses LZ4 compression
October 9, 2025 at 5:21 PM
Blob GC working again (now fully automatic) - much less impact on foreground work (old GC was stop-the-world)
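(The general shape of it, as a sketch rather than the exact implementation: track stale bytes per blob segment and rewrite only the segments that cross a threshold, instead of scanning everything with the world stopped.)

```rust
/// Minimal shape of ratio-triggered blob (value log) GC.
struct BlobSegment {
    id: u64,
    total_bytes: u64,
    stale_bytes: u64, // bumped as values are overwritten or deleted
}

impl BlobSegment {
    fn stale_ratio(&self) -> f64 {
        self.stale_bytes as f64 / self.total_bytes as f64
    }
}

/// Pick segments worth rewriting; the caller relocates their live values
/// into a fresh segment in the background and then drops the old files.
fn pick_gc_candidates(segments: &[BlobSegment], threshold: f64) -> Vec<u64> {
    segments
        .iter()
        .filter(|seg| seg.stale_ratio() >= threshold)
        .map(|seg| seg.id)
        .collect()
}
```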
October 5, 2025 at 3:53 PM
Fun coloured bars and lines
October 2, 2025 at 5:09 PM
Yeah a little bit; new disk format, changed some APIs to be nicer etc. etc.
September 18, 2025 at 5:13 PM
I will never do a major release like this again
September 18, 2025 at 5:03 PM
Yeah, that's a bit better
September 6, 2025 at 10:02 PM
History repeats itself
August 9, 2025 at 11:42 PM
Lovely 3D plots to sweeten the day
August 7, 2025 at 4:57 PM
Refactoring a 3-level deep .iter().map() is something my future self will have to deal with
May 26, 2025 at 4:39 PM
And here's with a similarly tuned RocksDB

16K blocks, Zipfian point reads, with 64M cache

There are a bunch of reasons why: better file I/O (v3 does not use fseek anymore), skipping superfluous memcpys when reading & decompressing blocks, a better block format that does not require a deserialization step...
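(E.g. the fseek point; a sketch of what dropping the seek looks like with positional reads, Unix-only and not necessarily the exact code in lsm-tree.)

```rust
use std::fs::File;
use std::io::{Read, Seek, SeekFrom};
use std::os::unix::fs::FileExt;

/// Seek-then-read: two calls, and the file cursor is shared mutable state,
/// so concurrent readers of the same table file need coordination.
fn read_block_seek(file: &mut File, offset: u64, len: usize) -> std::io::Result<Vec<u8>> {
    let mut buf = vec![0u8; len];
    file.seek(SeekFrom::Start(offset))?;
    file.read_exact(&mut buf)?;
    Ok(buf)
}

/// Positional read: one pread-style call, no cursor, works through &File,
/// so many threads can read the same file without locking.
fn read_block_at(file: &File, offset: u64, len: usize) -> std::io::Result<Vec<u8>> {
    let mut buf = vec![0u8; len];
    file.read_exact_at(&mut buf, offset)?;
    Ok(buf)
}
```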
May 10, 2025 at 5:34 PM
Oh
May 9, 2025 at 6:59 PM
I'm sorry afl, my fuzz test is just very sophisticated
April 19, 2025 at 7:33 PM
1000??
April 18, 2025 at 5:23 PM
You really are having the time of your life, aren't you?
April 10, 2025 at 11:58 PM
You too?
March 26, 2025 at 7:18 PM
Looks like replacing std::slice::partition_point will push point read performance a bit further for heavily cached workloads
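(The kind of replacement I mean, as a sketch only, not what actually landed: a fixed-shape lower-bound loop that the compiler can turn into conditional moves, versus the closure-based partition_point.)

```rust
/// What partition_point gives you: index of the first element >= key.
fn lower_bound_std(offsets: &[u64], key: u64) -> usize {
    offsets.partition_point(|&x| x < key)
}

/// Hand-rolled lower bound over the same data; halving the remaining size
/// and conditionally advancing `base` tends to compile to fewer branches
/// than the generic closure-based search.
fn lower_bound_manual(offsets: &[u64], key: u64) -> usize {
    if offsets.is_empty() {
        return 0;
    }
    let mut base = 0usize;
    let mut size = offsets.len();
    while size > 1 {
        let half = size / 2;
        let mid = base + half;
        // Conditional move instead of a hard-to-predict branch.
        if offsets[mid] < key {
            base = mid;
        }
        size -= half;
    }
    base + (offsets[base] < key) as usize
}
```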
March 11, 2025 at 5:48 PM
My previous methodology was a bit wrong: it was doing an additional heap allocation during benchmarking. Here it is, corrected:
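(The kind of thing that slipped in, shown here purely as an illustration rather than the actual harness: an allocation inside the timed loop that the real read path never pays for, versus reusing the buffer.)

```rust
use std::time::Instant;

fn main() {
    let payload = vec![0u8; 4096];

    // Flawed: the Vec allocation inside the timed loop is measured too,
    // so every "read" pays for an allocator round trip.
    let start = Instant::now();
    for _ in 0..1_000_000 {
        let mut buf = Vec::with_capacity(payload.len());
        buf.extend_from_slice(&payload);
        std::hint::black_box(&buf);
    }
    println!("per-iteration alloc: {:?}", start.elapsed());

    // Corrected: allocate once, reuse the buffer, only the copy is timed.
    let mut buf = Vec::with_capacity(payload.len());
    let start = Instant::now();
    for _ in 0..1_000_000 {
        buf.clear();
        buf.extend_from_slice(&payload);
        std::hint::black_box(&buf);
    }
    println!("buffer reused: {:?}", start.elapsed());
}
```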
January 31, 2025 at 1:58 AM
Yup, not using jemalloc (falling back to Ubuntu's default system allocator) shows almost a 50% improvement.

This is with large (32K) data blocks and LZ4 compression; with the default block size it should be more like ~4µs per read.
January 24, 2025 at 5:28 PM
2.6.0 should give some nice read performance improvements; I'm seeing around 15-20% for uncached workloads.

This is using jemalloc; the impact could be higher for other memory allocators.
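(For reference, this is the usual way a Rust binary opts into jemalloc, via the tikv-jemallocator crate; drop the attribute and you're back on the system allocator.)

```rust
// Route every heap allocation in the binary (including library code)
// through jemalloc instead of the system allocator.
use tikv_jemallocator::Jemalloc;

#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    let v: Vec<u64> = (0..1_000).collect();
    println!("{}", v.len());
}
```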
January 24, 2025 at 5:27 PM
Reading a Mark Callaghan blog post, I noticed the level scaling in lsm-tree was kinda off... I changed it to be more similar to Rocks; surprisingly, it didn't make a huge difference. But then I also improved the compaction picking algorithm and added parallel compactions (within the same level), and:
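(Roughly the scheme I mean, sketched in the common leveled-compaction style rather than lsm-tree's exact code: each level's target grows by a fixed multiplier, and the picker compacts whichever level overshoots its target the most.)

```rust
/// RocksDB-style level targets: L1 gets a fixed base size and each deeper
/// level is `multiplier` times larger than the one above it.
fn level_target_bytes(level: usize, base_bytes: u64, multiplier: u64) -> u64 {
    base_bytes * multiplier.pow(level.saturating_sub(1) as u32)
}

/// Score each level by how far it overshoots its target and pick the worst
/// offender; parallelism and tie-breaking aside, this is the usual picker.
fn pick_level_to_compact(level_sizes: &[u64], base_bytes: u64, multiplier: u64) -> Option<usize> {
    level_sizes
        .iter()
        .enumerate()
        .skip(1) // L0 is usually scored by file count instead
        .map(|(level, &size)| {
            let target = level_target_bytes(level, base_bytes, multiplier) as f64;
            (level, size as f64 / target)
        })
        .filter(|&(_, score)| score > 1.0)
        // scores are finite, so partial_cmp never fails here
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(level, _)| level)
}
```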
December 1, 2024 at 3:15 PM