Avi and Dude -
I opened an SR about the inability to allocate memory_target > 256GB - but we haven't made much headway so far - so I decided to try huge pages in the meantime.
With 1TB of RAM I allocated 450GB to huge pages (225,000 pages of 2M), and allocated an SGA of 400GB.
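For anyone following along, the arithmetic behind a setting like that can be sketched roughly as below - the cushion figure is my own assumption (Oracle typically needs slightly more huge pages than the raw SGA size), not something from this setup:

```shell
# Sketch: derive a vm.nr_hugepages value for a given SGA size,
# assuming the usual 2 MB huge page size reported by
# "grep Hugepagesize /proc/meminfo" on x86_64.
sga_mb=$((400 * 1024))                      # 400 GB SGA, expressed in MB
hugepage_mb=2                               # 2 MB huge pages (assumption)
cushion=1024                                # ~2 GB of extra pages (assumption)
pages=$(( sga_mb / hugepage_mb + cushion ))
echo "vm.nr_hugepages = $pages"
```

The value would then be persisted in /etc/sysctl.conf, and after instance startup you can confirm the pages are actually in use with `grep Huge /proc/meminfo` (HugePages_Total vs. HugePages_Free).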
This worked fine - but here's what happened. Our tables are multi-TB, so while Oracle uses the SGA as best it can, most IO still comes from disk - and having lots of free RAM lets OEL / UEK assist those reads in a meaningful way through the page cache. The OS is really capable that way - you can see read rates of as much as 2X the physical IO capacity of the server's IO subsystem thanks to OS caching.
The thing is - this requires leaving the OS ample free RAM - so typically the default tmpfs setting of 50% is good.
However - here - I took away almost half the RAM for huge pages - so there is much less RAM for the OS to use as an assist - and read rates are about half what they were before.
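One way to watch that assist shrink is to look at the kernel's own accounting - the OS read-cache effect shows up directly in the Cached figure (this is just a generic Linux check, nothing Oracle-specific):

```shell
# How much RAM is the kernel using as page cache right now?
# Values are in kB; Cached is the read-assist pool discussed above.
grep -E '^(MemTotal|MemFree|Cached):' /proc/meminfo
```

Comparing that Cached figure before and after reserving huge pages should line up with the drop in read rates.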
So the SGA might be more efficient - but the reads have taken a hit. Also - large queries seem to run slower when explicitly specifying SGA and PGA versus AMM - though more testing is required to confirm this.
So this is the dilemma - I completely understand that AMM => 4KB pages => inefficient memory management for the SGA. However - leaving all the RAM for the OS to use as cache, and letting AMM use /dev/shm as it pleases, seems - runtime-wise - to be extremely efficient.
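For what it's worth, the AMM side of this is easy to observe: with memory_target set, the SGA granules live as files on the /dev/shm tmpfs, so you can watch the allocation directly (the ora_* file naming is what I'd expect to see, but treat it as illustrative):

```shell
# Under AMM, SGA granules are backed by tmpfs files in /dev/shm -
# df shows the tmpfs filling as the instance allocates memory.
df -h /dev/shm
# When an instance is up, the granule files look like ora_<sid>_*:
ls /dev/shm | head
```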
Thoughts? Suggestions? I am still pursuing the ability to set memory_target > 256GB - because it should work, efficient or not - but I also want to make sure I give huge pages a proper test.
Also - maybe on future servers with 2TB of RAM I can dedicate 1TB to enhancing the OS cache effect and another 1TB to SGA/PGA, using huge pages for the SGA part. Nevertheless - this is still subject to confirming that manually specifying SGA and PGA sizes performs just as well as giving it all to AMM to split between the two as it sees fit.
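If it helps the discussion, the manual-management variant I have in mind looks something like the pfile fragment below - the sizes are illustrative for the 2TB scenario, and use_large_pages assumes a version that supports it (11.2.0.2+):

```
# Hypothetical pfile fragment - manual memory management with huge pages
memory_target=0              # disable AMM so the SGA can use huge pages
sga_target=900G              # illustrative size for the 2TB-server idea
pga_aggregate_target=100G    # illustrative
use_large_pages=ONLY         # fail startup rather than silently fall back
```

The ONLY setting is the part I'd lean on for testing - it guarantees the instance either gets huge pages or refuses to start, so the comparison against AMM is apples to apples.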
Thanks!