Oracle 11gR2 (11.2.0.3) RAC on AIX Install



+++++++++++++++++++++
Environment Verification
+++++++++++++++++++++


Check PV sizes
===========
# getconf DISK_SIZE /dev/hdisk46
102400


getconf DISK_SIZE /dev/rhdisk2
getconf DISK_SIZE /dev/rhdisk3
getconf DISK_SIZE /dev/rhdisk4
getconf DISK_SIZE /dev/rhdisk5
getconf DISK_SIZE /dev/rhdisk6
getconf DISK_SIZE /dev/rhdisk7
getconf DISK_SIZE /dev/rhdisk8
getconf DISK_SIZE /dev/rhdisk9
getconf DISK_SIZE /dev/rhdisk10
getconf DISK_SIZE /dev/rhdisk11
getconf DISK_SIZE /dev/rhdisk12
getconf DISK_SIZE /dev/rhdisk13
getconf DISK_SIZE /dev/rhdisk14
getconf DISK_SIZE /dev/rhdisk15
getconf DISK_SIZE /dev/rhdisk16
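The per-disk checks above can be generated with a loop instead of typing one line per device; a minimal sketch (the rhdisk2-rhdisk16 names are this environment's, adjust the range to yours):

```shell
# Build one getconf command per candidate PV, then review the output
# and pipe it to sh on the AIX host to execute.
cmds=""
i=2
while [ "$i" -le 16 ]; do
  cmds="${cmds}getconf DISK_SIZE /dev/rhdisk$i
"
  i=`expr $i + 1`
done
printf '%s' "$cmds"
```

Save as pv_size.sh and run `sh pv_size.sh | sh` on each node; getconf DISK_SIZE reports the size in MB.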






o Hardware Requirements
· Physical memory (at least 1.5 gigabytes (GB) of RAM)
· An amount of swap space equal to the amount of RAM
· Temporary space (at least 1 GB) available in /tmp
· At least 4.5 GB of available disk space for the Grid home directory
· At least 4 GB of available disk space for the Oracle Database home directory
o Network Hardware Requirements
· At least two network interface cards

o IP Address Requirements
· One public IP address for each node
· One virtual IP address for each node
· Three single client access name (SCAN) addresses for the cluster


o VNC client (or other X Window System software)








o ORACLE_SID
o database character set




AIX OS versions ("32- and 64-bit" refers to the Oracle software, not the OS)
===================================================
AIX 5L V5.3 TL 09 SP1 ("5300-09-01") or higher, 64 bit kernel (Part Number E10854-01)
AIX 6.1 TL 02 SP1 ("6100-02-01") or higher, 64-bit kernel
AIX 7.1 TL 00 SP1 ("7100-00-01") or higher, 64-bit kernel


oslevel
'genkex | grep 64' or 'genkex | grep call' 
lslpp -L | grep 64bit


AIX Disk Space
===================================================
6.40G Database 
1.55G Software 


AIX RAM
===================================================
Database: minimum 1GB, recommended 2GB 
Grid Infrastructure for standalone server: minimum 1.5GB (plus another 1GB if installing Database too), recommended 4GB


/usr/sbin/lsattr -HE -l sys0 -a realmem




AIX swap
===================================================
RAM between 1 GB and 2 GB: 1.5 times RAM
RAM between 2 GB and 16 GB: equal to RAM
RAM more than 16 GB: 16 GB of swap


/usr/sbin/lsps -a
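The sizing rule above is simple arithmetic; a minimal sketch that computes the required swap from the RAM size reported by lsattr (the helper name is mine, values in MB):

```shell
# Given RAM in MB, print the minimum swap (MB) per the rule above:
# 1-2 GB RAM -> 1.5x RAM; 2-16 GB -> equal to RAM; >16 GB -> 16 GB.
required_swap_mb() {
  ram=$1
  if [ "$ram" -le 2048 ]; then
    expr $ram \* 3 / 2
  elif [ "$ram" -le 16384 ]; then
    echo "$ram"
  else
    echo 16384
  fi
}
required_swap_mb 2048    # -> 3072
required_swap_mb 8192    # -> 8192
required_swap_mb 32768   # -> 16384
```

Compare the result against the paging space shown by /usr/sbin/lsps -a.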


AIX tmp
===================================================
1GB


AIX JDK & JRE
===================================================
IBM JDK 1.6.0.00 (64 BIT)




AIX Patches/Packages 
===================================================
AIX 5.3 required packages: 
bos.adt.base 
bos.adt.lib 
bos.adt.libm 
bos.perf.libperfstat 5.3.9.0 or later 
bos.perf.perfstat 
bos.perf.proctools 
rsct.basic.rte (For RAC configurations only) 
rsct.compat.clients.rte (For RAC configurations only) 
xlC.aix50.rte:10.1.0.0 or later 
gpfs.base 3.2.1.8 or later (Only for RAC) 


lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat 5.3.9.0 \
bos.perf.perfstat bos.perf.proctools rsct.basic.rte \
rsct.compat.clients.rte xlC.aix50.rte:10.1.0.0  gpfs.base 3.2.1.8 


AIX 6.1 required packages: 
bos.adt.base 
bos.adt.lib 
bos.adt.libm 
bos.perf.libperfstat 6.1.2.1 or later 
bos.perf.perfstat 
bos.perf.proctools 
rsct.basic.rte (For RAC configurations only) 
rsct.compat.clients.rte (For RAC configurations only) 
xlC.aix61.rte:10.1.0.0 or later 
gpfs.base 3.2.1.8 or later (Only for RAC) 


lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat 6.1.2.1 \
bos.perf.perfstat bos.perf.proctools rsct.basic.rte \
rsct.compat.clients.rte xlC.aix61.rte:10.1.0.0 gpfs.base 3.2.1.8

AIX 7.1 required packages: 
bos.adt.base 
bos.adt.lib 
bos.adt.libm 
bos.perf.libperfstat 
bos.perf.perfstat 
bos.perf.proctools 
xlC.aix61.rte.10.1.0.0 or later 
xlC.rte.10.1.0.0 or later 
gpfs.base 3.3.0.11 or later (Only for RAC) 


lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat bos.perf.perfstat \
bos.perf.proctools xlC.aix61.rte xlC.rte gpfs.base


Authorized Problem Analysis Reports (APARs) for AIX 5.3: 
IZ42940 
IZ49516 
IZ52331 
See Note:1379753.1 for other AIX 5.3 patches that may be required 


/usr/sbin/instfix -i -k "IZ42940 IZ49516 IZ52331"


APARs for AIX 6.1: 
IZ41855 
IZ51456 
IZ52319 
IZ97457 
IZ89165 
See Note:1264074.1 and Note:1379753.1 for other AIX 6.1 patches that may be required 




/usr/sbin/instfix -a | grep -i "SHLAP64"
/usr/sbin/instfix -a | grep -i "SOCKETPAIR"
/usr/sbin/instfix -a | grep -i "MULTICAST"
/usr/sbin/instfix -a | grep -i "retransmit"






IV04047: SHLAP64 UNABLE TO PROCESS ORACLE REQUEST LEADING TO KERNEL HANG
IV16603: SYSTEM CRASH DUE TO FREED SOCKET WHEN SOCKETPAIR() CALL USED


IV16250: MULTICAST UDP PACKETS NOT DELIVERED TO ALL LISTENERS IN WPAR ENV
IV20595: The first retransmit packet may be sent after 64 seconds (Doc Number=5795)
IV09942: SYSTEM CRASH IN NETINFO_UNIXDOMNLIST
IV08797: SYSTEM CRASH
IZ75919: KRLOCK SERIALIZATION ISSUE
IZ76433: KRLOCK SERIALIZATION ISSUE APPLIES TO AIX 6100-06


IZ97166: CRASH IN NETINFO_UNIXDOMNLIST WHILE RUNNING NETSTAT APPLIES TO AIX 6100-06


IY83611: AIX CRASHES WHEN USING 16GB PAGES WITH ORACLE




/usr/sbin/instfix -i -k "IZ41855 IZ51456 IZ52319 IZ97457 IZ89165"




APARs for AIX 7.1: 
IZ87216 
IZ87564 
IZ89165 
IZ97035 
See Note:1264074.1 and Note:1379753.1 for other AIX 7.1 patches that may be required 


/usr/sbin/instfix -i -k "IZ87216 IZ87564 IZ89165 IZ97035"


lslpp -w | grep -i "software title" 
/usr/sbin/instfix -ik patch number




Substitute patches
===================
** Patch IZ89165 **


6100-03 - use AIX APAR IZ89304
6100-04 - use AIX APAR IZ89302
6100-05 - use AIX APAR IZ89300
6100-06 - use AIX APAR IZ89514
7100-00 - use AIX APAR IZ89165


** Patch IZ97457 **


5300-11 - use AIX APAR IZ98424
5300-12 - use AIX APAR IZ98126
6100-04 - use AIX APAR IZ97605
6100-05 - use AIX APAR IZ97457
6100-06 - use AIX APAR IZ96155
7100-00 - use AIX APAR IZ97035








recommended APARs
====================
1. Required for the USLA heap over-consumption bug:
AIX 6.1 TL-07 APAR IV09580
http://www-01.ibm.com/support/docview.wss?uid=isg1IV09580
click on "obtain the fix for this APAR;" choose 6100-07
AIX 7.1 TL-01 APAR IV09541
https://www-304.ibm.com/support/docview.wss?uid=isg1IV09541
click on "obtain the fix for this APAR;" choose 7100-01


2、Paging Space Growth May Occur Unexpectedly With 64K (medium) Pages Enabled 


Customers on AIX should monitor paging space closely, and consider applying:
· For AIX 6.1, APAR IZ71987, which is already available. Reference: http://www-01.ibm.com/support/docview.wss?uid=isg1IZ71987
· For AIX 5.3, APAR IZ67445 (64K PAGING TAKING PLACE WHEN AVAILABLE SYSTEM RAM EXISTS) when the APAR becomes available.


3. Relink fails with a binder segmentation fault when linking the EM agent (ins_emagent.mk):
INFO:
ld: 0706-010 The binder was killed by a signal: Segmentation fault
Check for binder messages or use local problem reporting procedures.

INFO: make: 1254-004 The error code from the last command is 254.

Stop.

INFO: make: 1254-004 The error code from the last command is 2.

Stop.

INFO: End output from spawned process.
INFO: ----------------------------------
INFO: Exception thrown from action: make
Exception Name: MakefileException
Exception String: Error in invoking target 'agent nmb nmo nmhs' of makefile '/u01/app/oracle/product/11.2.0/sysman/lib/ins_emagent.mk'.


IZ89304 for AIX 6.1 TL3
IZ89302 for AIX 6.1 TL4
IZ89300 for AIX 6.1 TL5
IZ88711 or IZ89514 for AIX 6.1 TL6 - check with IBM
IZ89165 for AIX 7.1 TL0 SP2




4. AIX: ORA-07445 [ksmpclrpga] OR Link/Relink/Make Fails With: ld: 0711-780 SEVERE ERROR: Symbol .ksmpfpva (entry 58964) in object libserver11.a[ksmp.o] [ID 1379753.1]
ftp://public.dhe.ibm.com/aix/efixes/
5.3 TL11 - iv10538
5.3 TL12 - iv11158
6.1 TL4 - iv11167
6.1 TL5 - iv10576
6.1 TL6 - iv10539
6.1 TL7 - iv09580
7.1 TL0 - unaffected
7.1 TL1 - iv09541







AIX Kernel Settings
===================================================
<<< only AIX5L
set AIXTHREAD_SCOPE=S in the environment: 
export AIXTHREAD_SCOPE=S 
NOTE: This is only necessary on AIX5L. A change was introduced in AIX 6.1 which means that the variable does not need to be set. 


# /usr/sbin/no -a | fgrep ephemeral 
tcp_ephemeral_low = 32768 
tcp_ephemeral_high = 65535 
udp_ephemeral_low = 32768 
udp_ephemeral_high = 65535 


In the preceding example, the TCP and UDP ephemeral ports are set to the default range (32768-65535). 


-- widen the ranges
# /usr/sbin/no -p -o tcp_ephemeral_low=9000 -o tcp_ephemeral_high=65500 
# /usr/sbin/no -p -o udp_ephemeral_low=9000 -o udp_ephemeral_high=65500 






maxuproc 16384 


Check 'env' output for LINK_CNTRL
 


+++++++++++++++++++++
Pre-installation Requirements
+++++++++++++++++++++


Configure /etc/hosts
==============
/etc/hosts
#Edit by bruce.song 2014.10.15 


#hostname
22.12.100.21 aixcluster01
22.12.100.27 aixcluster02


#oracle_vip
22.12.100.23 aixcluster01-vip
22.12.100.29 aixcluster02-vip


#oracle_private_ip1
172.16.3.1 aixcluster01-priv1
172.16.3.3 aixcluster02-priv1


#oracle_private_ip2
172.16.3.11 aixcluster01-priv2 
172.16.3.13 aixcluster02-priv2


#oracle_scan_ip
22.12.100.25 aixcluster01-scan
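A quick reachability sweep over the names above can be generated in one loop; a sketch using this cluster's hostnames (VIP and SCAN addresses only answer once Grid Infrastructure is up, so failures for those are expected before installation):

```shell
# Emit one ping per cluster name; review the list, then pipe to sh to run.
names="aixcluster01 aixcluster02
aixcluster01-vip aixcluster02-vip
aixcluster01-priv1 aixcluster02-priv1
aixcluster01-priv2 aixcluster02-priv2
aixcluster01-scan"
cmds=""
for h in $names; do
  cmds="${cmds}ping -c 1 $h
"
done
printf '%s' "$cmds"
```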




===============
User Accounts
===============


Create the Oracle inventory group:
mkgroup -'A' id='501' adms='root' oinstall


Create the ASM and DBA groups:
mkgroup -'A' id='502' adms='root' asmadmin
mkgroup -'A' id='503' adms='root' asmdba
mkgroup -'A' id='504' adms='root' asmoper
mkgroup -'A' id='505' adms='root' dba


Create the Grid installation user (grid):
mkuser id='501' pgrp='oinstall' groups='asmadmin,asmdba,asmoper' home='/home/grid' fsize=-1 cpu=-1 data=-1 rss=-1 stack=-1 stack_hard=-1 capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
passwd grid
Create the Oracle installation user (oracle):
mkuser id='502' pgrp='oinstall' groups='dba,asmdba' home='/home/oracle' fsize=-1 cpu=-1 data=-1 rss=-1 stack=-1 stack_hard=-1 capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
passwd oracle


Note: you must log back in as each newly created user and change its password (the password may stay the same as the initial one).




<create software dirs>
mkdir -p /u01/sw/db
mkdir -p /u01/sw/psu
mkdir -p /u01/sw/patch
chmod -R 777 /u01/sw


<Oracle inventory directory>
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory
chmod -R 775 /u01/app/oraInventory


<Grid Infrastructure base directory>
mkdir -p /u01/app/grid
chown -R grid:oinstall /u01/app/grid
chmod -R 775 /u01/app/grid


<Grid Infrastructure home directory>
mkdir -p /u01/11.2.0/grid
chown -R grid:oinstall /u01/11.2.0/grid
chmod -R 775 /u01/11.2.0/grid


<Oracle base directory>
mkdir -p /u01/app/oracle
mkdir -p /u01/app/oracle/cfgtoollogs
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle


<Oracle RDBMS home directory>
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chmod -R 775 /u01/app/oracle/product/11.2.0/db_1








Check the created users
===================================================
# id oracle 
uid=502(oracle) gid=501(oinstall) groups=503(asmdba),505(dba)
# id grid 
uid=501(grid) gid=501(oinstall) groups=502(asmadmin),503(asmdba),504(asmoper)




Check and Grant Privileges
===================================================
<<< check user capabilities
# lsuser -a capabilities grid
grid capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
<<< if the output is not correct, run:
# chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
# lsuser -a capabilities oracle 
oracle capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
<<< if the output is not correct, run:
# chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle


Setup SSH user equivalency
===================================================
As the respective user on all nodes, first remove old keys: rm -rf $HOME/.ssh
As grid: $GI_OUI/sshsetup/sshUserSetup.sh -user grid -hosts "aixcluster01 aixcluster02" -advanced -noPromptPassphrase
As oracle (from the database OUI home): $OUI_HOME/sshsetup/sshUserSetup.sh -user oracle -hosts "aixcluster01 aixcluster02" -advanced -noPromptPassphrase


Test SSH user equivalency
===================================================
su - grid
date;ssh aixcluster01 date
date;ssh aixcluster02 date
su - oracle
date;ssh aixcluster01 date
date;ssh aixcluster02 date
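The manual date checks above can be generated for both users across both nodes in one pass; a sketch (node names are this cluster's, run the printed commands as root):

```shell
# Emit one su/ssh date check per user/node pair; equivalency is broken
# if any of them prompts for a password or prints a banner.
cmds=""
for u in grid oracle; do
  for n in aixcluster01 aixcluster02; do
    cmds="${cmds}su - $u -c 'ssh $n date'
"
  done
done
printf '%s' "$cmds"
```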


Configure the GRID User profile
===================================================
# su - grid 
$ echo $SHELL 
/usr/bin/ksh 
$ vi .profile 
umask 022
If the ORACLE_SID, ORACLE_HOME, or ORACLE_BASE environment variables are set in the file, remove those lines.
export GI_OUI=/stage/core/AIX/PPC-64/64bit/rdbms/11.2.0.1.0/grid/
cd $GI_OUI




===============
OS Configuration
===============


AIO Checking
===================================================
Check that aio_maxreqs is 65536.

AIX 6.1 check:
# ioo -o aio_maxreqs
 aio_maxreqs = 65536

AIX 5.3 check:
# lsattr -El aio0 -a maxreqs
AIX 5.3 change:
# chdev -l aio0 -a maxreqs=65536






VMM Parameter Checking
===================================================
Checking: 
vmo -L minperm% 


vmo -L maxperm%
 
vmo -L maxclient%


vmo -L lru_file_repage 


vmo -L strict_maxclient 


vmo -L strict_maxperm




Change: 
vmo -p -o minperm%=3 
vmo -p -o maxperm%=90 
vmo -p -o maxclient%=90 
vmo -p -o lru_file_repage=0 
vmo -p -o strict_maxclient=1 
vmo -p -o strict_maxperm=0


Modification to restricted tunable strict_maxperm (aix 5.3) 
===================================================
On AIX 5.3, strict_maxperm is a restricted tunable; setting it to 0 also writes the value to the nextboot file and prints "Warning: a restricted tunable has been modified". This warning is expected.

ncargs and maxuproc changes:
lsattr -E -l sys0 -a ncargs
chdev -l sys0 -a ncargs=256 


lsattr -E -l sys0 -a maxuproc 
chdev -l sys0 -a maxuproc=16384


NTP Change (optional)
===================================================
ps -ef | grep xntpd
If xntpd is not running with the -x option, do the following:
· a. Open the /etc/rc.tcpip file, and locate the following line: start /usr/sbin/xntpd "$src_running"
· b. Change the line to the following: start /usr/sbin/xntpd "$src_running" "-x"
· c. Save the file.


Run rootpre.sh
===================================================
#./rootpre.sh
./rootpre.sh output will be logged in /tmp/rootpre.out_10-05-30.05:58:36
Saving the original files in /etc/ora_save_10-05-30.05:58:36....
Copying new kernel extension to /etc....
Loading the kernel extension from /etc


Network Preparation
===================================================
PARAMETER          RECOMMENDED VALUE
ipqmaxlen          512
rfc1323            1
sb_max             1500000
tcp_recvspace      65536
tcp_sendspace      65536
udp_recvspace      1351680  (should be 10 times udp_sendspace, but must be less than sb_max)
udp_sendspace      135168   (at least 4 KB + db_block_size * db_multiblock_read_count)
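The udp_sendspace rule above is easy to verify arithmetically; a sketch (the helper name is mine, and 8192/16 are assumed example values for db_block_size and the multiblock read count, substitute your own):

```shell
# Minimum udp_sendspace = 4 KB + db_block_size * db_multiblock_read_count
udp_sendspace_min() {
  expr 4096 + $1 \* $2
}
udp_sendspace_min 8192 16    # -> 135168, the recommended value above
```

udp_recvspace should then be 10 times this result, kept below sb_max.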


no -a | egrep "ipqmaxlen|rfc1323|sb_max|tcp_recvspace|tcp_sendspace|udp_recvspace|udp_sendspace"




Change commands:
no -r -o ipqmaxlen=512
no -p -o rfc1323=1
no -p -o sb_max=1500000
no -p -o tcp_recvspace=65536
no -p -o tcp_sendspace=65536
no -p -o udp_recvspace=1351680
no -p -o udp_sendspace=135168
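A small comparison helper makes the audit repeatable; a sketch where the current value is passed in explicitly (on AIX you would fill it from `no -o <param>`; the function name and the sample "current" values here are made up for illustration):

```shell
# Compare a tunable's current value against the recommended one.
check_tunable() {
  p=$1; want=$2; cur=$3
  if [ "$cur" = "$want" ]; then
    echo "$p: OK ($cur)"
  else
    echo "$p: MISMATCH current=$cur recommended=$want"
  fi
}
# Example calls with made-up current values:
check_tunable ipqmaxlen 512 128
check_tunable rfc1323 1 1
```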






Client Preparation
===================================================
Here we are using a VNC client; any other X-terminal software also works. Download the vnc*.rpm packages.
Install the RPM on AIX 5L with: rpm -Uhv vnc*
Start it: vncserver -geometry 1024x800
As root, allow the remote host: xhost + <remotehost>




===============
Shared storage
===============
#chown grid:asmadmin /dev/rhdisk4
#chmod 660 /dev/rhdisk4
# lsattr -E -l hdisk4 | grep reserve_
reserve_policy no_reserve
#chdev -l hdisk4 -a [ reserve_lock=no | reserve_policy=no_reserve ]
reserve_policy applies to AIX (MPIO) storage; reserve_lock applies to EMC and some other storage. You need to change the
reserve option on every storage device you will be using in ASM.
# /usr/sbin/chdev -l hdisk4 -a pv=clear




o Verify that the device names of the disks used for database storage are identical on all nodes
oo Assign a PVID to each PV you intend to use
chdev -l hdisk[6-37] -a pv=yes
oo On the other nodes, verify that the disks with the same PVIDs also have the same device names
oo If the names match on all nodes, proceed to grant permissions; if not, create alias devices with mknod, or remove the devices and rediscover them
o Change the owner of the PVs used for database storage
chown grid:asmadmin /dev/rhdisk[6-37]
chown grid:asmadmin /dev/rhdisk2
chown grid:asmadmin /dev/rhdisk3
chown grid:asmadmin /dev/rhdisk4
chown grid:asmadmin /dev/rhdisk5
chown grid:asmadmin /dev/rhdisk6
chown grid:asmadmin /dev/rhdisk7
chown grid:asmadmin /dev/rhdisk8
chown grid:asmadmin /dev/rhdisk9
chown grid:asmadmin /dev/rhdisk10
chown grid:asmadmin /dev/rhdisk11
chown grid:asmadmin /dev/rhdisk12
chown grid:asmadmin /dev/rhdisk13
chown grid:asmadmin /dev/rhdisk14
chown grid:asmadmin /dev/rhdisk15
chown grid:asmadmin /dev/rhdisk16
chown grid:asmadmin /dev/rhdisk17


o Change the permissions of the PVs used for database storage
chmod 660 /dev/rhdisk[3-17]
chmod 660 /dev/rhdisk2
chmod 660 /dev/rhdisk3
chmod 660 /dev/rhdisk4
chmod 660 /dev/rhdisk5
chmod 660 /dev/rhdisk6
chmod 660 /dev/rhdisk7
chmod 660 /dev/rhdisk8
chmod 660 /dev/rhdisk9
chmod 660 /dev/rhdisk10
chmod 660 /dev/rhdisk11
chmod 660 /dev/rhdisk12
chmod 660 /dev/rhdisk13
chmod 660 /dev/rhdisk14
chmod 660 /dev/rhdisk15
chmod 660 /dev/rhdisk16
chmod 660 /dev/rhdisk17


o Check and set the reserve policy
oo Check
lsattr -E -l hdisk[6-37] | grep reserve_
lsattr -E -l hdisk2 | grep reserve_
lsattr -E -l hdisk3 | grep reserve_
lsattr -E -l hdisk4 | grep reserve_
lsattr -E -l hdisk5 | grep reserve_
lsattr -E -l hdisk6 | grep reserve_
lsattr -E -l hdisk7 | grep reserve_
lsattr -E -l hdisk8 | grep reserve_
lsattr -E -l hdisk9 | grep reserve_
lsattr -E -l hdisk10 | grep reserve_
lsattr -E -l hdisk11 | grep reserve_
lsattr -E -l hdisk12 | grep reserve_
lsattr -E -l hdisk13 | grep reserve_
lsattr -E -l hdisk14 | grep reserve_
lsattr -E -l hdisk15 | grep reserve_
lsattr -E -l hdisk16 | grep reserve_
lsattr -E -l hdisk17 | grep reserve_




oo Set the reserve policy:
<<< if the attribute is reserve_policy
chdev -l hdisk[6-37] -a reserve_policy=no_reserve 
chdev -l  hdisk2  -a reserve_policy=no_reserve 
chdev -l  hdisk3  -a reserve_policy=no_reserve 
chdev -l  hdisk4  -a reserve_policy=no_reserve 
chdev -l  hdisk5  -a reserve_policy=no_reserve 
chdev -l  hdisk6  -a reserve_policy=no_reserve 
chdev -l  hdisk7  -a reserve_policy=no_reserve 
chdev -l  hdisk8  -a reserve_policy=no_reserve 
chdev -l  hdisk9  -a reserve_policy=no_reserve 
chdev -l  hdisk10  -a reserve_policy=no_reserve 
chdev -l  hdisk11  -a reserve_policy=no_reserve 
chdev -l  hdisk12  -a reserve_policy=no_reserve 
chdev -l  hdisk13  -a reserve_policy=no_reserve 
chdev -l  hdisk14  -a reserve_policy=no_reserve 
chdev -l  hdisk15  -a reserve_policy=no_reserve 
chdev -l  hdisk16  -a reserve_policy=no_reserve 
chdev -l  hdisk17  -a reserve_policy=no_reserve 


<<< if the attribute is reserve_lock
chdev -l hdisk[6-37] -a reserve_lock=no 
o Clear the PVIDs
oo /usr/sbin/chdev -l hdisk[6-37] -a pv=clear
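The per-disk chown/chmod/chdev lists above can be generated in one pass; a sketch for hdisk2-hdisk17 as used in this environment (adjust the range, and swap in `reserve_lock=no` where your storage needs it):

```shell
# Emit the ownership, permission, and reserve-policy commands for every
# candidate disk; review the output, then pipe it to sh as root.
cmds=""
i=2
while [ "$i" -le 17 ]; do
  cmds="${cmds}chown grid:asmadmin /dev/rhdisk$i
chmod 660 /dev/rhdisk$i
chdev -l hdisk$i -a reserve_policy=no_reserve
"
  i=`expr $i + 1`
done
printf '%s' "$cmds"
```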




Configure environment variables
=============
Append the following to the end of .profile for the grid and oracle users on both servers:


oracle:
-----------------------
umask 022
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
export PS1=`hostname`:'$PWD'"$"




grid
-----------------------
umask 022
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/11.2.0/grid
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export NLS_DATE_FORMAT="yyyy-mm-dd hh24:mi:ss"
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
export PS1=`hostname`:'$PWD'"$"




 




+++++++++++++++++++++
Installation Process
+++++++++++++++++++++
Oracle Grid Infrastructure Install
===================================================
CLUSTER1
OCR_VOTE
sys/oracle123


osdba : asmdba 
osoper: asmoper
osasm : asmadmin


rm -rf /tmp/.oracle/
root.sh


# $ORACLE_BASE/oraInventory/orainstRoot.sh
Changing permissions of /haclu/app/11.2.0/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /haclu/app/11.2.0/oraInventory to oinstall.
The execution of the script is complete.
# ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /haclu/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-05-29 23:58:14: Parsing the host name
2010-05-29 23:58:14: Checking for super user privileges
2010-05-29 23:58:14: User has super user privileges
Using configuration parameter file: /haclu/app/11.2.0/grid/crs/install/crsconfig_params
User grid has the required capabilities to run CSSD in realtime mode
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'node1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'node1'
CRS-2676: Start of 'ora.gipcd' on 'node1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'node1'
CRS-2676: Start of 'ora.gpnpd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
CRS-2676: Start of 'ora.cssd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'node1'
CRS-2676: Start of 'ora.ctssd' on 'node1' succeeded
ASM created and started successfully.
Diskgroup DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'node1'
CRS-2676: Start of 'ora.crsd' on 'node1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 9fe9435cb5de4fb3bfb90bf463221f14.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 9fe9435cb5de4fb3bfb90bf463221f14 (/dev/rhdisk3) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'node1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'node1'
CRS-2676: Start of 'ora.mdnsd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'node1' ... (remaining startup output truncated)




Grid Infrastructure Home Patching
===================================================
RDBMS Software Install
===================================================
su - oracle
software only
osdba : dba
osoper: oper
root.sh should be run on one node at a time.


# ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /haclu/app/11.2.0/product/11.2.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.


RDBMS Home Patching
===================================================
Run ASMCA to create diskgroups
===================================================
Run DBCA to create the database
===================================================
===================================================
Critical Patch Updates (CPU) / Patch Set Updates (PSU); OPatch is Patch 6880880
===================================================
opatch
======================
unzip /u01/sw/patch/p6880880_112000_AIX64-5L.zip -d $ORACLE_HOME
$ORACLE_HOME/OPatch/opatch version




auto patch
======================
1. As the Grid home owner, create the OCM response file /home/grid/ocm.rsp:
$ORACLE_HOME/OPatch/ocm/bin/emocmrsp -output /home/grid/ocm.rsp


2. As the root user (usually), patch the GI home and all Oracle RAC database homes of the same version:
opatch auto /u01/sw/psu -ocmrf /home/grid/ocm.rsp     <<< restart crs
opatch auto /u01/sw/psu -oh /u01/app/oracle/product/11.2.0/db_1 -ocmrf /home/grid/ocm.rsp   


<<< /u01/11.2.0/grid/crs/install/crsconfig_params    <<< used when patching only the grid home






manual patch
======================
1. Stop the CRS-managed resources running from the DB homes.


If this is a GI Home environment, as the database home owner execute:

$ <ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file location> -n <node name>
If this is an Oracle Restart Home environment, as the database home owner execute:

$ <ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file location>

Note:

You need to make sure that the Oracle ACFS file systems are unmounted (see Section 2.8) and all other Oracle processes are shutdown before you proceed. 




2. Run the pre-patch root script.


If this is a GI Home, as the root user execute:

# /u01/11.2.0/grid/crs/install/rootcrs.pl -unlock






3. Apply the CRS patch.


As the GI home owner execute:

$ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/sw/psu/17592127
As the GI home owner execute:

$ORACLE_HOME/OPatch/opatch apply -oh $ORACLE_HOME -local /u01/sw/psu/18522512


4. Run the pre script for the DB component of the patch.


As the database home owner execute:

$ /u01/sw/psu/17592127/custom/server/17592127/custom/scripts/prepatch.sh -dbhome $ORACLE_HOME


5. Apply the DB patch.


As the database home owner execute:

$ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/sw/psu/17592127/custom/server/17592127
$ORACLE_HOME/OPatch/opatch apply -oh $ORACLE_HOME -local /u01/sw/psu/18522512


6. Run the post script for the DB component of the patch.


As the database home owner execute:

$ /u01/sw/psu/17592127/custom/server/17592127/custom/scripts/postpatch.sh -dbhome $ORACLE_HOME


7. Run the post-patch root script.


As the root user execute:

# /u01/11.2.0/grid/rdbms/install/rootadd_rdbms.sh
If this is a GI Home, as the root user execute:

# /u01/11.2.0/grid/crs/install/rootcrs.pl -patch



select action,comments from registry$history;
bug 13342249


--For each database instance running on the Oracle home being patched, connect to the database using SQL*Plus. Connect as SYSDBA and run the catbundle.sql script as follows:




cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> select * from registry$history;
