Currently, `RlpMemo` needs to `Decompress` the data first to work on it, and then `Compress` it again before applying it to the database. It could be cheaper and more efficient if it worked on the compressed data directly, copying it over only when one of the children is deleted or added and the underlying `RlpMemo` structure requires amendment (adding or removing 32 bytes).
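To make the idea concrete, here is a minimal sketch of what editing the compressed form in place could look like, assuming a hypothetical layout of a 2-byte child bitmap followed by the packed 32-byte keccaks of the present children. The `CompressedMemoSketch` type, its `Set`/`Clear` helpers and the layout itself are illustrative assumptions, not the actual `RlpMemo` API:

```csharp
using System;
using System.Buffers.Binary;
using System.Numerics;

// Illustrative only: assumes the compressed form is a 2-byte child bitmap followed by the
// packed 32-byte keccaks of the children that are present. The real RlpMemo layout may differ.
public static class CompressedMemoSketch
{
    private const int KeccakLength = 32;
    private const int BitmapLength = 2;

    // Sets the keccak of child `nibble` in place. If the child was absent, a 32-byte gap is
    // opened by shifting the packed payload right (the buffer must have 32 spare bytes).
    // Returns the new payload length.
    public static int Set(Span<byte> memo, int length, int nibble, ReadOnlySpan<byte> keccak)
    {
        ushort bitmap = BinaryPrimitives.ReadUInt16LittleEndian(memo);
        int offset = BitmapLength + BitOperations.PopCount((uint)(bitmap & ((1 << nibble) - 1))) * KeccakLength;

        if ((bitmap & (1 << nibble)) == 0)
        {
            // Child added: shift everything after `offset` right by 32 bytes and set its bit.
            memo.Slice(offset, length - offset).CopyTo(memo.Slice(offset + KeccakLength));
            BinaryPrimitives.WriteUInt16LittleEndian(memo, (ushort)(bitmap | (1 << nibble)));
            length += KeccakLength;
        }

        keccak.CopyTo(memo.Slice(offset, KeccakLength));
        return length;
    }

    // Clears the keccak of child `nibble` by closing its 32-byte gap. Returns the new length.
    public static int Clear(Span<byte> memo, int length, int nibble)
    {
        ushort bitmap = BinaryPrimitives.ReadUInt16LittleEndian(memo);
        if ((bitmap & (1 << nibble)) == 0)
            return length; // child not present, nothing to do

        int offset = BitmapLength + BitOperations.PopCount((uint)(bitmap & ((1 << nibble) - 1))) * KeccakLength;

        // Child removed: shift the tail left over the removed keccak and clear its bit.
        memo.Slice(offset + KeccakLength, length - offset - KeccakLength).CopyTo(memo.Slice(offset));
        BinaryPrimitives.WriteUInt16LittleEndian(memo, (ushort)(bitmap & ~(1 << nibble)));
        return length - KeccakLength;
    }
}
```

With such a layout, adding or removing a child costs a single bounded `memmove`-style shift of at most 15 × 32 bytes, with no full round trip through `Decompress`/`Compress`.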
The cost of `Compress` and `Decompress` is paid by various components.
`Decompress` is used by `ComputeMerkleBehavior` when it reads a Merkle `Branch` that was not previously modified during a given block and needs to be amended. This is now partially amortized by the `Prefetcher` component, which copies the data locally and calls the decompression when needed. This helps a lot, but if we could ditch the decompression entirely, we could either speed up the Merkle `Prefetcher` or abandon it altogether (a smaller cost to be paid by merkleization, which for storage is highly parallel).
`Compress` is called whenever the data is applied to the database. This is done by `InspectBeforeApply`, which is currently not in the hot path of execution: it is called by `.FlusherTask`, so it introduces no immediate penalty. There is, however, a pending PR #417 that moves it to the hot path by calling the `.Apply` method in the main thread. Whether its whole execution, or at least a part of it, can be offloaded is a subject for discussion; having no `Compress` call at all would certainly not make it slower.