NVMe SSD raid-z1 shows low throughput, limited by compression or just bad benchmarking? #16930
Unanswered
LunarLambda asked this question in Q&A
Built a raid-z1 out of 3x Samsung 990 PRO 2 TB (2 GiB DRAM cache) NVMe drives.
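For context, the layout is roughly equivalent to the sketch below (pool and device names are placeholders, not my actual ones):

```sh
# minimal sketch of the layout described above; names are illustrative
zpool create tank raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
```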
With fio doing random I/O on 4 threads I hit about 160 MB/s read and write, and in rudimentary `dd` tests I hit 2.4 GB/s for write and 1.2 GB/s for read.

ZFS settings are `relatime=on xattr=sa dnodesize=auto compression=zstd normalization=formD acltype=posixacl`. Otherwise everything is default for zfs 2.2.7 on Linux 6.6.69. `ashift=9`, as the SSDs only report 512-byte sectors. I know `ashift=12` might be better, but I don't imagine I'm losing *that* much to it.

The SSDs are capable of much more (5-7.8 GB/s advertised) and sit in a decently new machine (AMD 9800X3D, 32 GiB 6400 MT/s RAM), so either `zstd` is really cutting my throughput down or my testing methodology just sucks.

I'd love to know a more rigorous way to test what performance I can get out of my zpool, or even just anecdotes about what kind of performance you can expect from a typical ZFS setup relative to the drives' rated speeds.
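To be concrete about what I mean by "rudimentary": the random-I/O numbers come from an fio run along these lines (the exact flags may have differed, so treat this as a sketch of the workload rather than the literal command, and the mountpoint/sizes are placeholders):

```sh
# 4 jobs of 4 KiB random read/write against a directory on the pool
fio --name=zpool-randrw --directory=/tank/fio-test \
    --rw=randrw --bs=4k --size=4G \
    --numjobs=4 --ioengine=psync \
    --runtime=60 --time_based --group_reporting

# a large sequential write pass for comparison
fio --name=zpool-seqwrite --directory=/tank/fio-test \
    --rw=write --bs=1M --size=8G \
    --numjobs=1 --ioengine=psync --group_reporting
```

One caveat I'm aware of: if the `dd` write test was fed from `/dev/zero`, zstd compresses that to almost nothing, so the 2.4 GB/s figure may say more about compression than about the drives.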
EDIT: I also threw kdiskmark at it with the NVMe SSD preset and that gave significantly better-looking numbers.
If this is more representative of actual throughput, I can see this being down to my use of `zstd`.
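One way I can think of to isolate the `zstd` question is to re-run the same benchmark on a dataset with compression disabled and check how much zstd is actually saving; something like this (dataset name is just an example):

```sh
# create a sibling dataset with compression off and re-run the same benchmark there
zfs create -o compression=off tank/nocomp

# see how much zstd is actually compressing the existing data
zfs get compression,compressratio tank
```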