Locking the ORACLE RAC SGA on AIX

In the previous post we discussed how ORACLE allocates shared memory for the SGA in AIX, and one of the conclusions was that AIX does not give all the requested memory to the ORACLE instance right away but merely promises it.

While this technique allows memory to be used more efficiently – you can (at least temporarily) request more memory for processes and shared segments than what AIX physically has – it also has a rather unpleasant consequence: when we reach the limit of physical memory, AIX will have no choice but to start paging memory.

Paging is not necessarily a bad thing – moving older and not-so-often used data out of memory is something that will be done rather routinely – this is how AIX keeps a healthy system. However, when SGA memory starts to page out (and, more importantly, page back in) things can go bad quickly as, well, ORACLE does not really expect SGA to be a disk based area … (ORACLE would have called it “SDA” if that was the case ;-) )

You probably know that in the vast majority of configurations, it is strongly advised to size SGA so that it fits entirely into physical memory and never pages out. The question becomes: how can we accomplish that on AIX?

Pinning ORACLE SGA into AIX Memory

It turns out that there are several ways to pin ORACLE SGA into AIX memory, some of them ORACLE-driven, some AIX-driven and a combination of both …

First of all, let’s look at what ORACLE offers.

We will start by checking the ORACLE SGA-related parameters:

SQL> show parameter sga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     FALSE
sga_max_size                         big integer 8G
sga_target                           big integer 8G

The first two parameters (lock_sga and pre_page_sga) look promising and, in fact, they can be used to control how SGA memory is allocated.

Let’s look at pre_page_sga first.

Controlling memory allocation with pre_page_sga

According to ORACLE documentation, when pre_page_sga is set to true, “every process that starts must access every page in the SGA”. Obviously, when this happens, the entire SGA memory is used and thus allocated.
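As a sketch of the mechanics (assuming an spfile-based instance and a SYSDBA session; your environment may differ), the parameter is static, so a restart is required:

```shell
# Hypothetical sketch: enable pre_page_sga and bounce the instance.
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET pre_page_sga = TRUE SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
EOF
```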

Let’s see for ourselves. Before we even begin, let’s remind ourselves how memory is allocated when this parameter is NOT set.

[Figure: svmon output with default SGA settings]

As you can see, in the beginning most of the memory is not yet allocated (AIX promised it but did not yet deliver), as not all of the memory has been used.

After setting pre_page_sga to TRUE and restarting the database the picture changes:

[Figure: svmon output with pre_page_sga=TRUE]

Notice that all segments are allocated to the MAX - that is the result of instance processes reading and touching all the memory pages during startup. This obviously has a direct effect on the time it takes to start up the database – in my environment it took ~ 40 seconds (compared to ~ 12 seconds with default settings) for a 4Gb SGA. Presumably, however, this additional time has not been wasted – all further requests to SGA memory are supposed to hit real physical memory and AIX will not need to do any additional allocations.

Still, there are two problems with this approach:

  1. Notice that the memory, although fully allocated, is NOT really pinned. That means that if AIX starts experiencing memory shortages, you can bet that it will start paging SGA memory out with all the unpleasant consequences.
  2. A somewhat unexpected consequence is that it now takes more time for any ORACLE process to start, as the “touching” is not done just during instance startup – it happens for every new ORACLE process (i.e. dedicated server). In my environment, the average database connection time went from ~ 0.2 seconds to ~ 0.8 seconds, a 4x increase.

Given these downsides, it is really hard to find a good justification for using pre_page_sga to “load ORACLE memory into memory”. I’m guessing this parameter is probably a relic of the past or, perhaps, a way to pre-load memory for systems that do not support real memory pinning (remember that ORACLE can run on many operating systems). But on modern AIX, I just do not see how it can be used effectively.

So, let’s move on to the next parameter – lock_sga

Controlling memory allocation with lock_sga

When lock_sga is set to true, ORACLE (based on what truss output shows) issues this additional call on the (global – think ipcs) shared memory segment:

shmctl(..., SHM_LOCK, ...)

which pins the shared memory region into physical memory.

Let’s see how it works. After setting lock_sga=true, and restarting the database, here is what I see:

[Figure: svmon output with lock_sga=TRUE]

Notice that memory is not only fully allocated but also pinned, and this is really what we want to achieve. The database startup still takes more time than without this parameter (on my system, ~ 34 seconds compared to ~ 12 seconds, again for a 4Gb SGA), but normal database connections do not suffer any longer as, beyond startup, ORACLE processes do not need to do any (major) extra work.
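To verify the pinning from the AIX side, you can cross-check svmon against the instance’s shared memory segment. A rough sketch (segment ids are system-specific and svmon column layout varies by AIX level, so treat this as illustrative):

```shell
# Find the shared memory segment(s) owned by oracle - note the id column.
ipcs -ma | grep oracle

# Inspect the background process that attached the SGA; for a pinned SGA,
# the "pin" page count of the shared memory segment should match "inuse".
svmon -P $(ps -eo pid,comm | awk '/ora_pmon/ {print $1; exit}')
```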

One note here: many ORACLE documentation sources recommend also setting the v_pinshm AIX vmo parameter to enable memory pinning, as in:

vmo -p -o v_pinshm=1

However this is no longer required, unless you are dealing with a really old version of ORACLE.

With versions up to 9i, ORACLE used a different call for memory pinning:

shmget(IPC_PRIVATE, shm_size, IPC_CREAT | SHM_PIN)

which required that v_pinshm also be set. As I mentioned, in 10g and beyond ORACLE uses:

shmctl(shm_id, SHM_LOCK, ...)

which completely ignores v_pinshm settings (special thanks to Leszek Leszczynski for researching this in detail). In my tests, ORACLE 10g/11g memory was pinned regardless of the value of v_pinshm. You can, of course, still set it if you need it for ORACLE 9i or for other applications.
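If you do still need it for 9i or other software, checking the current value first is a one-liner (a sketch; vmo requires root privileges):

```shell
# Show the current v_pinshm setting (0 = disabled, 1 = enabled).
vmo -o v_pinshm

# Enable it persistently only if something still relies on shmget/SHM_PIN:
# vmo -p -o v_pinshm=1
```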

In any case, it looks like setting lock_sga=true (and v_pinshm=1, if needed) solves the problem of SGA pinning to our satisfaction – memory is pinned and everybody is happy.

But I would submit that for larger SGAs (and what SGA these days is NOT large? :-) ) there is an even better way to work with AIX memory and that is – using AIX large memory pages.

Using AIX large memory pages

Before discussing how to use large memory pages in AIX, let’s step back a little and discuss what exactly these pages are and what kind of pages we can allocate.

As I mentioned already, memory in AIX is controlled by a Virtual Memory Manager that divides (virtual) memory into chunks (or pages) of equal size. Traditionally, these chunks have always had a mandatory size of 4K, but recently, with the advent of machines that can handle large amounts of RAM, this started to change.

AIX on modern hardware now supports 4 different page sizes: 4K, 64K, 16M and 16G; I outlined the differences between them in the table below:



Page size | svmon symbol | Configuration          | Pageable | How to configure                                             | How to use
----------+--------------+------------------------+----------+--------------------------------------------------------------+--------------
4K        | s            | Traditional, automatic | YES      | N/A                                                          | By default
64K       | m            | Automatic              | YES      | N/A                                                          | By default
16M       | L            | Manual                 | NO       | vmo -p -o lgpg_regions=2048 lgpg_size=16777216               | lock_sga=TRUE
          |              |                        |          | chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle  |
16G       | S            | Manual                 | NO       | vmo -p -o lgpg_regions=10 lgpg_size=17179869184              | lock_sga=TRUE
          |              |                        |          | chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle  |

As you can see, 4K pages are still there and still the default, but AIX has now also added new “default” 64K pages. Default in this context means that you do not need to do anything special to either enable these pages or use them – AIX will decide when to use 64K pages instead of 4K, and this will be done completely transparently to programs (including ORACLE, of course). In fact, on modern hardware you will most likely see 64K pages used by the ORACLE SGA, as a (large) SGA size will definitely warrant them.

There is also an interesting development with AIX 6.1. While AIX 5.3 can allocate 64K pages from the start, AIX 6.1 can take existing 4K memory regions and see if they can be “collapsed” from 4K to 64K pages. svmon will show “collapsed” regions as sm.

But back to memory pages. Beyond medium (64K) pages, AIX also allows the use of even larger pages – 16M or 16G. However, there are two important differences here:

  1. Large pages are NOT available by default. They require extra steps to enable them and (separately) to use them
  2. Large pages are NOT pageable. Once allocated, they always stay in memory and cannot be paged in or out (which is probably a good thing, but you do need to pay special attention to how you size them).

In addition to that, not all AIX hardware will support larger pages. To see if your particular hardware supports them run:

AIX > pagesize -a

4096
65536
16777216
17179869184

The one problem with large pages is that they are somewhat cumbersome to use.

First of all, you have to pre-calculate the “large page memory” size and explicitly register it with the VMM (this will take memory away from regular VMM operations and designate it as the “large page” region).

AIX > vmo -p -o lgpg_regions=2048 lgpg_size=16777216
AIX > bosboot -ad /dev/hdisk0; bosboot -ad /dev/hdisk1; bosboot -ad /dev/ipldevice
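The sizing itself is just the SGA size (plus whatever headroom you want) divided by the 16M page size. A quick sketch of the arithmetic, using a hypothetical 32 GB target:

```shell
# How many 16M large pages cover a 32 GB large-page region?
# (32 GB is a hypothetical target; adjust to your SGA plus headroom.)
sga_bytes=$(( 32 * 1024 * 1024 * 1024 ))
lgpg_size=$(( 16 * 1024 * 1024 ))
echo $(( sga_bytes / lgpg_size ))    # 2048 regions
```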

Personally, I do not see this as a major issue, as you have to assign a specific size to your SGA anyway, although now you will have to do it at two levels: ORACLE and AIX.

Second, even when allocated, large pages cannot be used unless you allow the user to skip regular VMM allocation policies with this command:

AIX > chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE $USER

which, again, in my mind is only a minor nuisance.

(Of course, there are also bugs … in particular, ORACLE 10.2.0.4 will not use AIX large pages even if all the settings are made, unless one-off patch 7226548 is applied … But I digress …)

Anyway, now we know what large pages are, but why exactly do we want to use them?

Well, for larger SGA sizes the benefits should be fairly obvious: making page sizes larger reduces the number of pages that AIX has to manage, and that makes managing memory more efficient. Think about it: for a 30 Gb SGA (which is not excessively big these days …), the number of pages is reduced from 7,864,320 (for 4K pages) or 491,520 (for 64K pages) to 1,920 if we switch to 16M pages – and that is, indeed, quite a saving …
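The page counts above are easy to re-derive with shell arithmetic:

```shell
# Number of pages needed for a 30 GB SGA at each page size.
sga=$(( 30 * 1024 * 1024 * 1024 ))
echo $(( sga / 4096 ))        # 7864320 pages at 4K
echo $(( sga / 65536 ))       # 491520 pages at 64K
echo $(( sga / 16777216 ))    # 1920 pages at 16M
```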

For example, look at how this reduction affects database startup time (test results from one of my systems):

  • the 30 Gb SGA database started in ~ 6 seconds with default settings (but remember that memory is not really fully allocated)
  • lock_sga=TRUE with 64K pages changed that startup time to ~ 35 seconds
  • lock_sga=TRUE + large (16M) pages drove the startup time back to ~ 6 seconds

On top of that, once you set up large pages, you have effectively shielded this memory from the rest of the system – it will not be paged out or affected by regular memory operations, which is, ultimately, what you want to achieve in most cases.

Finally, once allocated, how will “large page” memory be reported by svmon? Well, see for yourself:

[Figure: svmon output with large pages in use]

This would normally conclude the AIX memory story, if not for one thing – ORACLE 11g made major changes in this area, making the SGA, in addition to the PGA, much more dynamic …
