Poor disk performance #11591
Unanswered
mcouture87 asked this question in Q&A
Replies: 2 comments · 7 replies
-
If going back to ext4 works for you, then you should just do that. DVRs don't really need ZFS.
-
There's not enough information here for a good answer. Can you describe the hardware, and so on? Re: https://linux-hardware.org/, the result of a probe would be ideal.
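(For reference, a linux-hardware.org probe is generated with the hw-probe tool; the invocation below is the one documented on that site, assuming the package is installed:)

```
# Collects an anonymized hardware inventory, uploads it,
# and prints a shareable probe URL on success.
sudo -E hw-probe -all -upload
```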
-
I have a Linux Mint 20.10 machine that is a home file server and also hosts DVR services for the house. I ran the volume on ext4 with a single drive for years and it performed great. I then moved to ZFS, added a second drive, and created a mirrored vdev, and now I am getting horrible disk performance. My iowait hovers at 10% with one DVR stream (ffmpeg), and if I try to copy files from one directory to another, iowait climbs to 80% in a matter of minutes.
I cannot reliably watch TV anymore, as the DVR app can no longer keep up with the streams to the TVs (Apple TVs)... (some streams have to be transcoded for remote viewing)
I created the zpool using ZFS 0.8.3 (the version built into the Ubuntu kernel)... performance was slow, but the DVR was fine.
I just updated to ZFS 2.0.2 using jonathonf's PPA and imported the zpool without upgrading it... now the DVR can't keep up.
I'm not sure where to take this from here. If I can't solve the performance issue, I will probably go back to ext4. Here is my config; hopefully someone finds an easy fix...
I have an AMD 2400G with 32 GB RAM and two Seagate IronWolf Pro 6 TB drives.
CPU averages 20% or less.
Memory usage is 50%.
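The mirror topology itself isn't captured in the property dump below; if it helps, `zpool status` shows it along with per-disk state:

```
# Confirms both disks are attached to the mirror vdev and ONLINE,
# and whether a scrub or resilver is running in the background.
zpool status -v datapool
```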
```
zpool get all datapool
NAME PROPERTY VALUE SOURCE
datapool size 5.45T -
datapool capacity 57% -
datapool altroot - default
datapool health ONLINE -
datapool guid 17848209554314143902 -
datapool version - default
datapool bootfs - default
datapool delegation on default
datapool autoreplace off default
datapool cachefile - default
datapool failmode wait default
datapool listsnapshots off default
datapool autoexpand off default
datapool dedupratio 1.00x -
datapool free 2.31T -
datapool allocated 3.15T -
datapool readonly off -
datapool ashift 0 default
datapool comment - default
datapool expandsize - -
datapool freeing 0 -
datapool fragmentation 9% -
datapool leaked 0 -
datapool multihost off default
datapool checkpoint - -
datapool load_guid 13893450904409544856 -
datapool autotrim off default
datapool feature@async_destroy enabled local
datapool feature@empty_bpobj active local
datapool feature@lz4_compress active local
datapool feature@multi_vdev_crash_dump enabled local
datapool feature@spacemap_histogram active local
datapool feature@enabled_txg active local
datapool feature@hole_birth active local
datapool feature@extensible_dataset active local
datapool feature@embedded_data active local
datapool feature@bookmarks enabled local
datapool feature@filesystem_limits enabled local
datapool feature@large_blocks active local
datapool feature@large_dnode enabled local
datapool feature@sha512 enabled local
datapool feature@skein enabled local
datapool feature@edonr enabled local
datapool feature@userobj_accounting active local
datapool feature@encryption enabled local
datapool feature@project_quota active local
datapool feature@device_removal enabled local
datapool feature@obsolete_counts enabled local
datapool feature@zpool_checkpoint enabled local
datapool feature@spacemap_v2 active local
datapool feature@allocation_classes enabled local
datapool feature@resilver_defer enabled local
datapool feature@bookmark_v2 enabled local
datapool feature@redaction_bookmarks disabled local
datapool feature@redacted_datasets disabled local
datapool feature@bookmark_written disabled local
datapool feature@log_spacemap disabled local
datapool feature@livelist disabled local
datapool feature@device_rebuild disabled local
datapool feature@zstd_compress disabled local
```
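One property worth a second look above is `ashift 0`: at the pool level this only means "auto-detect", and the operative value is recorded per vdev. If these IronWolf drives are 512e, a misdetected ashift of 9 would badly hurt write performance, so confirming it is a cheap, read-only check:

```
# Prints the ashift actually recorded for each vdev in the pool's cached config.
zdb -C datapool | grep ashift
```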
```
cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=8589934592
```
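A value in /etc/modprobe.d only takes effect when the zfs module loads, so on Ubuntu-based systems it is worth rebuilding the initramfs after editing the file; the cap can also be changed at runtime for testing:

```
# Bake the modprobe option into the initramfs so it applies at boot:
sudo update-initramfs -u
# Or apply the 8 GiB ARC cap immediately, without a reboot:
echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```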
```
cat /proc/spl/kstat/zfs/arcstats
13 1 0x01 116 31552 5600546171 56297461616523
name type data
hits 4 35418417
misses 4 239581
demand_data_hits 4 1274086
demand_data_misses 4 39009
demand_metadata_hits 4 34032080
demand_metadata_misses 4 171629
prefetch_data_hits 4 5913
prefetch_data_misses 4 25734
prefetch_metadata_hits 4 106338
prefetch_metadata_misses 4 3209
mru_hits 4 15759390
mru_ghost_hits 4 17356
mfu_hits 4 19553954
mfu_ghost_hits 4 3244
deleted 4 120876
mutex_miss 4 6
access_skip 4 2
evict_skip 4 9
evict_not_enough 4 4
evict_l2_cached 4 0
evict_l2_eligible 4 112078066176
evict_l2_ineligible 4 14263227392
evict_l2_skip 4 0
hash_elements 4 167766
hash_elements_max 4 167809
hash_collisions 4 69153
hash_chains 4 3226
hash_chain_max 4 2
p 4 7668583936
c 4 8589934592
c_min 4 984389760
c_max 4 8589934592
size 4 8611324536
compressed_size 4 5890527232
uncompressed_size 4 8514649600
overhead_size 4 875291648
hdr_size 4 58001624
data_size 4 5826797056
metadata_size 4 939021824
dbuf_size 4 406999296
dnode_size 4 1010020896
bonus_size 4 327319680
anon_size 4 16257536
anon_evictable_data 4 0
anon_evictable_metadata 4 0
mru_size 4 5621927936
mru_evictable_data 4 4482911232
mru_evictable_metadata 4 125950976
mru_ghost_size 4 2975560192
mru_ghost_evictable_data 4 2315230720
mru_ghost_evictable_metadata 4 660329472
mfu_size 4 1127633408
mfu_evictable_data 4 943528448
mfu_evictable_metadata 4 14465536
mfu_ghost_size 4 5558424576
mfu_ghost_evictable_data 4 4536541184
mfu_ghost_evictable_metadata 4 1021883392
l2_hits 4 0
l2_misses 4 0
l2_feeds 4 0
l2_rw_clash 4 0
l2_read_bytes 4 0
l2_write_bytes 4 0
l2_writes_sent 4 0
l2_writes_done 4 0
l2_writes_error 4 0
l2_writes_lock_retry 4 0
l2_evict_lock_retry 4 0
l2_evict_reading 4 0
l2_evict_l1cached 4 0
l2_free_on_write 4 0
l2_abort_lowmem 4 0
l2_cksum_bad 4 0
l2_io_error 4 0
l2_size 4 0
l2_asize 4 0
l2_hdr_size 4 0
l2_log_blk_writes 4 0
l2_log_blk_avg_asize 4 0
l2_log_blk_asize 4 0
l2_log_blk_count 4 0
l2_data_to_meta_ratio 4 0
l2_rebuild_success 4 0
l2_rebuild_unsupported 4 0
l2_rebuild_io_errors 4 0
l2_rebuild_dh_errors 4 0
l2_rebuild_cksum_lb_errors 4 0
l2_rebuild_lowmem 4 0
l2_rebuild_size 4 0
l2_rebuild_asize 4 0
l2_rebuild_bufs 4 0
l2_rebuild_bufs_precached 4 0
l2_rebuild_log_blks 4 0
memory_throttle_count 4 0
memory_direct_count 4 0
memory_indirect_count 4 0
memory_all_bytes 4 31500472320
memory_free_bytes 4 7156772864
memory_available_bytes 3 6002009984
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 231
arc_meta_used 4 2741363320
arc_meta_limit 4 6442450944
arc_dnode_limit 4 644245094
arc_meta_max 4 2972621216
arc_meta_min 4 16777216
async_upgrade_sync 4 13427
demand_hit_predictive_prefetch 4 12119
demand_hit_prescient_prefetch 4 0
arc_need_free 4 0
arc_sys_free 4 1154762880
arc_raw_size 4 0
cached_only_in_progress 4 0
abd_chunk_waste_size 4 43164160
```
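For what it's worth, the counters above describe a healthy cache: hits / (hits + misses) works out to roughly 99.3%, and `size` exceeding `c_max` by about 20 MB is a normal transient overshoot rather than a leak. The ratio can be computed straight from the file:

```
# ARC hit ratio from the top-level hits/misses counters.
awk '/^hits/ {h=$3} /^misses/ {m=$3} END {printf "ARC hit ratio: %.1f%%\n", 100*h/(h+m)}' \
    /proc/spl/kstat/zfs/arcstats
```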
```
cat /proc/meminfo
MemTotal: 30762180 kB
MemFree: 4662312 kB
MemAvailable: 14624908 kB
Buffers: 2398332 kB
Cached: 6226312 kB
SwapCached: 0 kB
Active: 9920952 kB
Inactive: 2328936 kB
Active(anon): 3555528 kB
Inactive(anon): 17448 kB
Active(file): 6365424 kB
Inactive(file): 2311488 kB
Unevictable: 32 kB
Mlocked: 32 kB
SwapTotal: 8388604 kB
SwapFree: 8388604 kB
Dirty: 24 kB
Writeback: 0 kB
AnonPages: 3625372 kB
Mapped: 460516 kB
Shmem: 18656 kB
KReclaimable: 1741308 kB
Slab: 6442768 kB
SReclaimable: 1741308 kB
SUnreclaim: 4701460 kB
KernelStack: 17296 kB
PageTables: 25624 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 23769692 kB
Committed_AS: 9252352 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 1442096 kB
VmallocChunk: 0 kB
Percpu: 35584 kB
HardwareCorrupted: 0 kB
AnonHugePages: 114688 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
FileHugePages: 0 kB
FilePmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 616920 kB
DirectMap2M: 17154048 kB
DirectMap1G: 14680064 kB
```
```
iotop
Total DISK READ: 5.56 M/s | Total DISK WRITE: 3.52 M/s
Current DISK READ: 3.97 K/s | Current DISK WRITE: 5.59 M/s
PID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
1254 be/4 root 0.00 B/s 0.00 B/s 0.00 % 6.68 % [txg_sync]
3970242 be/4 dvruser 3.97 K/s 1928.37 K/s 0.00 % 0.72 % ffmpeg -hide_banner -nostats -loglevel~168.100.54-292057528/remux/stream.m3u8
365 be/3 root 0.00 B/s 10.32 K/s 0.00 % 0.01 % [jbd2/nvme0n1p2-]
2037 be/4 mongodb 0.00 B/s 7.14 K/s 0.00 % 0.00 % mongod --unixSocketPrefix=/run/mongodb --config /etc/mongodb.conf
2373 be/4 unifi 0.00 B/s 3.17 K/s 0.00 % 0.00 % unifi -cwd /usr/lib/unifi -home /usr/l
3211 be/4 unifi 0.00 B/s 5.56 K/s 0.00 % 0.00 % java -Dfile.encoding=UTF-8 -Djava.awt.~ -jar /usr/lib/unifi/lib/ace.jar start
3372 be/4 dvruser 5.55 M/s 461.91 K/s 0.00 % 0.00 % channels-dvr
3970246 ?dif dvruser 0.00 B/s 1183.53 K/s 0.00 % 0.00 % ffmpeg -hide_banner -nostats -loglevel~057528/encoder-1-520933303/stream.m3u8
4061737 be/4 root 0.00 B/s 812.70 B/s 0.00 % 0.00 % python3 /usr/bin/glances
```
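Since [txg_sync] tops the IO> column, a useful next capture during a slow copy would be per-disk throughput and latency, to see whether one side of the mirror is lagging:

```
# Per-vdev I/O statistics at 1-second intervals; on OpenZFS 2.x,
# add -l to include average wait/latency columns.
zpool iostat -v datapool 1
```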