
RlpMemo should work on compressed data #445

Open
Scooletz opened this issue Dec 9, 2024 · 0 comments · May be fixed by #456
Assignees
Labels
🐌 performance Performance related issue

Comments

@Scooletz
Contributor

Scooletz commented Dec 9, 2024

Currently, RlpMemo needs to Decompress data first to work on it. Then, it needs to Compress it before applying it to the database. It could be cheaper and more efficient if it could work on compressed data directly, copying the data over when one of the children is deleted or added and the underlying RlpMemo structure requires amendment (adding or removing 32 bytes when nodes are deleted or added).
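A minimal sketch of what "working on compressed data directly" could look like, assuming an illustrative compressed layout of a 2-byte bitmap of present children followed by one 32-byte entry per set bit. The names (`set_entry`, `clear_entry`) and the layout are hypothetical, not Paprika's actual API; the point is that an amendment is a single copy that inserts or removes 32 bytes, with no decompress/compress round-trip:

```python
# Hedged sketch: direct amendment of a hypothetical compressed RlpMemo.
# Assumed layout: [2-byte bitmap of present children][32 bytes per set bit].

HASH_SIZE = 32

def popcount_below(bitmap: int, nibble: int) -> int:
    """Count entries stored before the slot belonging to `nibble`."""
    return bin(bitmap & ((1 << nibble) - 1)).count("1")

def set_entry(memo: bytes, nibble: int, keccak: bytes) -> bytes:
    """Set or replace the 32-byte entry for `nibble`, copying the rest over."""
    assert len(keccak) == HASH_SIZE
    bitmap = int.from_bytes(memo[:2], "big")
    offset = 2 + popcount_below(bitmap, nibble) * HASH_SIZE
    if bitmap & (1 << nibble):
        # Child already present: overwrite its 32 bytes in place.
        return memo[:offset] + keccak + memo[offset + HASH_SIZE:]
    # Child added: grow the memo by 32 bytes at the slot's position.
    bitmap |= 1 << nibble
    return bitmap.to_bytes(2, "big") + memo[2:offset] + keccak + memo[offset:]

def clear_entry(memo: bytes, nibble: int) -> bytes:
    """Remove the entry for `nibble`, shrinking the memo by 32 bytes."""
    bitmap = int.from_bytes(memo[:2], "big")
    if not (bitmap & (1 << nibble)):
        return memo
    offset = 2 + popcount_below(bitmap, nibble) * HASH_SIZE
    bitmap &= ~(1 << nibble)
    return bitmap.to_bytes(2, "big") + memo[2:offset] + memo[offset + HASH_SIZE:]
```

Under this assumption, every amendment costs one linear copy of the (already small) compressed buffer, instead of expanding to the full structure and packing it back.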

The cost of Compress and Decompress is paid by various components.

Decompress is used by ComputeMerkleBehavior when it reads a Merkle Branch that was not previously modified during a given block and needs to be amended. This is now partially amortized by the Prefetcher component, which copies the data locally, calling the decompression when it is needed. This helps a lot, but if we could ditch the decompression altogether, we could either speed up the Merkle Prefetcher or abandon it entirely (a smaller cost to be paid by merkleization, which for storage is highly parallel).

Compress is called whenever the data are applied to the database. This is done by InspectBeforeApply and right now is not in the hot path of the execution. It is called by .FlusherTask, so it does not introduce an immediate penalty. There is a pending PR though, #417, that moves it to the hot path by calling the .Apply method in the main thread. Whether its whole execution, or at least a part of it, can be offloaded is a subject for discussion; having no Compress call at all would not make it slower.
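For reference, a self-contained sketch of the decompress/compress round-trip whose cost the two paragraphs above attribute to ComputeMerkleBehavior and InspectBeforeApply. The layout is the same illustrative assumption as before (2-byte bitmap plus 32-byte entries); `decompress` and `compress` are hypothetical stand-ins, not Paprika's actual functions:

```python
# Hedged sketch: the round-trip this issue proposes to eliminate.
# Assumed layout: [2-byte bitmap of present children][32 bytes per set bit].

HASH_SIZE = 32
BRANCH_FANOUT = 16  # one slot per nibble of a branch node

def decompress(memo: bytes):
    """Expand to a fixed 16-slot array, one optional 32-byte entry per child."""
    bitmap = int.from_bytes(memo[:2], "big")
    slots, pos = [], 2
    for nibble in range(BRANCH_FANOUT):
        if bitmap & (1 << nibble):
            slots.append(memo[pos:pos + HASH_SIZE])
            pos += HASH_SIZE
        else:
            slots.append(None)
    return slots

def compress(slots) -> bytes:
    """Pack the 16-slot array back into the compact form, dropping empty children."""
    bitmap = sum(1 << i for i, s in enumerate(slots) if s is not None)
    return bitmap.to_bytes(2, "big") + b"".join(s for s in slots if s is not None)
```

The round-trip is pure copying in both directions, which is exactly why editing the compact form in place, instead of expanding and repacking around every amendment, would remove the cost entirely rather than merely move it off the hot path.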

@Scooletz Scooletz added the 🐌 performance Performance related issue label Dec 9, 2024
@dipkakwani dipkakwani self-assigned this Dec 10, 2024
@dipkakwani dipkakwani linked a pull request Dec 27, 2024 that will close this issue