Lock internal and principal keys in RAM #279

Merged
dAdAbird merged 3 commits into percona:main from mlock on Sep 18, 2024
Conversation

@dAdAbird (Member) commented on Sep 12, 2024

This PR locks the internal and principal key cache data in RAM to prevent it from being paged out to swap.

For the internal keys, since memory locking is performed in units of whole pages, this PR also redesigns the cache so that all records are compactly placed in pages. Previously the cache was a linked list with nodes scattered across memory, so locking each key could "waste" a locked page: the number of lockable pages is limited (although on modern systems it is fairly large), and there is no guarantee that the next record lands on the same page. Placing records sequentially also makes iterating over them CPU-cache friendly, in contrast to chasing the random memory pointers of a linked list.
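To illustrate the idea, here is a minimal, hypothetical sketch (not the actual pg_tde code): key records are packed into a page-aligned, page-sized chunk that is `mlock()`ed once, instead of locking scattered linked-list nodes. `InternalKeyRecord` and the pool layout are assumptions made purely for illustration.

```c
/*
 * Sketch only: pack key-cache records into one page-aligned, page-sized
 * chunk and mlock() that chunk once.  The record layout is hypothetical.
 */
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

typedef struct InternalKeyRecord
{
	unsigned char key[32];		/* hypothetical key material */
	unsigned int  key_len;
} InternalKeyRecord;

typedef struct KeyCachePage
{
	void	   *mem;			/* page-aligned, mlock()ed memory */
	size_t		page_size;
	size_t		used;			/* number of records stored */
	size_t		capacity;		/* records that fit in one page */
} KeyCachePage;

static int
key_cache_page_init(KeyCachePage *page)
{
	page->page_size = (size_t) sysconf(_SC_PAGESIZE);

	/* Allocate exactly one page, aligned to a page boundary. */
	if (posix_memalign(&page->mem, page->page_size, page->page_size) != 0)
		return -1;

	/* Lock the whole page so the key material cannot be swapped out. */
	if (mlock(page->mem, page->page_size) != 0)
	{
		free(page->mem);
		return -1;
	}

	page->used = 0;
	page->capacity = page->page_size / sizeof(InternalKeyRecord);
	return 0;
}

static InternalKeyRecord *
key_cache_page_add(KeyCachePage *page, const InternalKeyRecord *rec)
{
	InternalKeyRecord *slot;

	if (page->used >= page->capacity)
		return NULL;			/* caller would allocate and lock a new page */

	/* Records are placed sequentially, so iteration stays cache friendly. */
	slot = (InternalKeyRecord *) page->mem + page->used++;
	memcpy(slot, rec, sizeof(*slot));
	return slot;
}
```

Because every record lands in an already-locked page, no additional `mlock()` calls are needed as the cache grows within a page, and a full page is only "spent" when a new chunk is allocated.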

For the principal keys, we just lock the hash table entry. Although this locks the whole memory page, that potential waste is acceptable: there should be fewer principal keys than internal keys, and they are allocated in shared memory once for all backends (unlike internal keys, which live in the TopMemoryContext of every backend). In addition, the hash table (via DSA) allocates in 4 kB pages, so the principal keys should end up compactly placed.
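A hedged sketch of the per-entry locking idea (the entry pointer and size are placeholders, not the actual dshash/DSA API): since `mlock()` operates on whole pages, the entry's address is rounded down to a page boundary and every page the entry touches is locked, accepting that the rest of each page gets locked too.

```c
/*
 * Sketch only: lock the memory page(s) containing a single hash-table
 * entry.  mlock() works on whole pages, so round the pointer down to a
 * page boundary; the kernel rounds the length up to a page boundary.
 */
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

static int
lock_entry_page(const void *entry, size_t entry_size)
{
	size_t		page_size = (size_t) sysconf(_SC_PAGESIZE);
	uintptr_t	start = (uintptr_t) entry & ~(uintptr_t) (page_size - 1);
	uintptr_t	end = (uintptr_t) entry + entry_size;

	/* Lock every page the entry touches (usually just one). */
	return mlock((void *) start, end - start);
}
```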

For https://perconadev.atlassian.net/browse/PG-823

@dAdAbird dAdAbird force-pushed the mlock branch 2 times, most recently from 9b3dfca to da4718d on September 13, 2024 17:46
@dAdAbird dAdAbird changed the title from "Lock internal keys in RAM" to "Lock internal and principal keys in RAM" on Sep 13, 2024

This commit locks the internal key cache data in RAM to prevent it from being paged out to swap.

Since memory locking is performed in units of whole pages, this commit also redesigns the cache so that all records are compactly placed in pages. Previously the cache was a linked list with nodes scattered across memory, so locking each key could "waste" a locked page: the number of lockable pages is limited (although on modern systems it is fairly large), and there is no guarantee that the next record lands on the same page. Placing records sequentially also makes iterating over them CPU-cache friendly, in contrast to chasing the random memory pointers of a linked list.
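As an aside on why locked pages are treated as a limited resource: on Linux, the amount of memory an unprivileged process may `mlock()` is bounded by `RLIMIT_MEMLOCK` (processes with `CAP_IPC_LOCK` are exempt). A small standalone check, independent of the PR's code:

```c
/* Print the current mlock() limits for this process. */
#include <stdio.h>
#include <sys/resource.h>

int
main(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0)
		printf("mlock limit: soft=%llu bytes, hard=%llu bytes\n",
			   (unsigned long long) rl.rlim_cur,
			   (unsigned long long) rl.rlim_max);
	return 0;
}
```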
We just lock the hash table entry. Although this locks the whole memory page, that potential waste is acceptable: there should be fewer principal keys than internal keys, and they are allocated in shared memory once for all backends (unlike internal keys, which live in the TopMemoryContext of every backend). In addition, the hash table (via DSA) allocates in 4 kB pages, so the principal keys should end up compactly placed.
@dAdAbird dAdAbird merged commit f030366 into percona:main Sep 18, 2024
13 checks passed
@dAdAbird dAdAbird deleted the mlock branch September 18, 2024 11:49