I have a question regarding the vm allocation when using HugePages on a Linux (Exadata) system.
Typically, ratios such as vm.dirty_background_ratio and vm.dirty_ratio are set to 10 and 20 respectively on a newly installed system. These seem like good numbers. However, once you add HugePages to the system, these ratios appear to be inflated by whatever percentage of RAM you have allocated to HugePages.
For instance, say you have a system with 768G of memory. Let's reserve 75% of that for HugePages:
768
-576 (75%)
= 192
This leaves 192G of the system available to normal processes.
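The subtraction above can be sketched as a small shell snippet (the values are the hypothetical example figures; on a live system you would read MemTotal, HugePages_Total, and Hugepagesize from /proc/meminfo instead):

```shell
# Example values only: 768G total RAM, 75% reserved for HugePages.
mem_total_gb=768
hugepages_gb=576
# Memory left for normal processes and the page cache:
usable_gb=$((mem_total_gb - hugepages_gb))
echo "Usable (non-HugePages) memory: ${usable_gb}G"   # prints 192G
```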
From what I can tell, these vm ratios are calculated against the entire system memory, without taking the HugePages reservation into consideration.
So, the system expects the following:
vm.dirty_background_ratio = 10 = 76.8G
vm.dirty_ratio = 20 = 153.6G
But in reality, the system only has 192G to work with. What we should be getting is the following (based on the 192G of usable memory):
vm.dirty_background_ratio = 10 = 19.2G
vm.dirty_ratio = 20 = 38.4G
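To make the gap concrete, here is a quick awk sketch that computes both sets of thresholds from the example figures above (768G total, 192G usable; nothing here is read from a real system):

```shell
# Dirty-writeback thresholds implied by the ratios, against total vs usable RAM.
awk 'BEGIN {
  total = 768; usable = 192          # example values from above, in GB
  printf "background vs total:  %.1fG\n", total  * 0.10   # 76.8G
  printf "dirty      vs total:  %.1fG\n", total  * 0.20   # 153.6G
  printf "background vs usable: %.1fG\n", usable * 0.10   # 19.2G
  printf "dirty      vs usable: %.1fG\n", usable * 0.20   # 38.4G
}'
```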
To express those targets as ratios of the full system memory, you would need to multiply the original ratios by the fraction of memory actually available (25%):
10 * (.25) = 2.5
20 * (.25) = 5
vm.dirty_background_ratio = 3 (rounding up the 2.5, since these ratios take integer values)
vm.dirty_ratio = 5
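If the analysis holds, persisting the rescaled values would look something like this (the filename is just an example, and the values assume the 75% HugePages reservation from above):

```
# /etc/sysctl.d/99-dirty-rescale.conf  (example filename)
# Rescaled for a 75% HugePages reservation, per the arithmetic above
vm.dirty_background_ratio = 3
vm.dirty_ratio = 5
```

which could then be loaded with `sysctl -p /etc/sysctl.d/99-dirty-rescale.conf`.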
Question 1: Is my thought process correct with this analysis?
Question 2: If so, should we be looking at other sysctl variables using ratios to modify when dealing with HugePages?
Thanks!
-Chris