zfs send -I sends data over the network in RAW format #16928
Replies: 5 comments 1 reply
-
It looks like it is no longer possible to send a deduplicated stream. See 'man zfs-send' and 'man zstreamdump'.
I would also suggest sending with '-L' to keep large blocks >128k, but they may already be broken up on the destination, so don't do that on an incremental send if the original send wasn't using it. Btw, 'raw send' is reserved for sending an encrypted ZFS stream without decrypting it. I don't know why sending deduplicated streams was deprecated. Also, the same issue will happen when sending a ZFS stream with ref-links, AFAIK.
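Since the deduplicated stream format is gone, the remaining way to shrink the stream is to send blocks as they are stored on disk. A minimal sketch with placeholder dataset and snapshot names; '-c', '-L' and '-e' are standard zfs send flags, but verify that the destination pool supports large blocks before using '-L':

```shell
# Build the send command as a string for review; run it manually once
# the names match your pools. All names below are placeholders.
SNAP1="vd/data@snap1"
SNAP2="vd/data@snap2"
# -c: send blocks compressed as they are stored on disk
# -L: allow records larger than 128K (destination must support them)
# -e: use embedded-block records where possible
CMD="zfs send -c -L -e -I ${SNAP1} ${SNAP2}"
echo "${CMD} | ssh target \"zfs receive -Fs vd/data\""
```

This avoids the decompress-on-send/recompress-on-receive round trip, though it does nothing for deduplication: duplicate blocks still cross the wire once each.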
-
Thanks @IvanVolosyuk - I saw that information about --dedup, and it seems strange to me. This way we lose the full potential of ZFS + dedup + compression. I hadn't heard about reflinks - if they also affect replication, that is more bad news for performance :(
-
@amotin Because my zpool has only 367G allocated. The data above are from zfs send with these parameters. I'm afraid so: the dedup ratio is now 62:1, so on the zpool I need only 367G of space, but to run the first replication to the new zpool I need to transfer 21.2TB. The RAW block device for that zpool is a 1TB disk, so transferring the whole block device would be a better option than using zfs send :(
Beta Was this translation helpful? Give feedback.
-
It does not. It is merely a coincidence. The zpool ALLOC value reports space used on the pool after dedup, but for all data, while
True. But I would instead reconsider how this backup works: looking at the huge 62:1 ratio, either you back up dozens of identical hosts, or the backup is not even close to being incremental.
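As a back-of-the-envelope check on the figures quoted in this thread (367G allocated, 21.2TB of logical data to replicate), the effective ratio can be recomputed; it lands near, but not exactly at, the reported 62:1, consistent with ALLOC also covering metadata:

```shell
# Assumed figures taken from the posts above; units converted inline.
awk 'BEGIN {
  alloc_gib   = 367     # zpool ALLOC after dedup, in GiB
  logical_tib = 21.2    # pre-dedup data to replicate, in TiB
  printf "effective ratio ~ %.1f:1\n", (logical_tib * 1024) / alloc_gib
}'
# prints: effective ratio ~ 59.2:1
```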
-
Hi,
I created an environment based on 2 servers, (S)ource and (T)arget, both running zfs-2.2.7-1~bpo12+1 (Debian).
On (S) and (T) I created: zpool create -O dedup=on -O compression=lz4 -O canmount=on -f vd raidz /dev/sdb /dev/sdc
On (S) the zpool is mounted and I write data to it; on (T) the zpool is unmounted.
Next I use
zfs send -I snap-on-S-1 snap-on-S-2 | ssh (T) "zfs receive -Fs vd"
to transfer data from (S) to (T). Everything works, but I observed that the amount of data transferred over the LAN is huge. Please look:
zpool list
I have 2 snapshots:
Size:
the difference between them is 300GB.
This suggests that
zfs send
rededuplicates and decompresses the data before sending it over the network, and then (T) deduplicates and compresses it all over again. And I see that the transfer over the network really is 300GB - over a 1Gb/s LAN it takes about 1 hour.
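That one-hour figure is roughly consistent with line rate. A quick check using the post's own numbers (300GB over a 1Gb/s link, ignoring ssh and protocol overhead):

```shell
awk 'BEGIN {
  bytes = 300 * 1e9         # ~300 GB of rededuplicated stream
  rate  = 1e9 / 8           # 1 Gb/s expressed in bytes per second
  printf "~%.0f minutes at line rate\n", bytes / rate / 60
}'
# prints: ~40 minutes at line rate
```

So the link really is carrying the full logical size of the delta, not the deduplicated/compressed on-disk size.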
I expected the transfer between two zpools with dedup and compression to carry the data "after" dedup and compression, not in RAW format - if it has to transfer the RAW format, it is not a performance optimisation :(
Maybe I set/configure something wrong, or
zfs send
simply works this way - is that true?
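One way to answer this empirically is a dry-run send, which prints the estimated stream size without transferring anything; '-n' and '-v' are standard zfs send flags. A sketch using the snapshot names from the post above:

```shell
# Build the dry-run command as a string for review; run it on (S).
SNAP_OLD="vd@snap-on-S-1"
SNAP_NEW="vd@snap-on-S-2"
# -n: dry run (nothing is sent)  -v: print the size estimate
CMD="zfs send -n -v -I ${SNAP_OLD} ${SNAP_NEW}"
echo "run on (S): ${CMD}"
```

Comparing that estimate with and without the '-c' (compressed) flag shows how much of the 300GB is decompression overhead rather than genuinely new data.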