Original article; please credit the source when reposting:
http://space.itpub.net/26239116/viewspace-749048
Many of the commands used here were found online, where identical copies are reposted endlessly with no source given. Crediting any one link would slight the true original author, so I will simply say thanks to whoever wrote them first.
Below is a record of my installation, with notes on the points that need attention, cleaned up for formatting.
Create the shared disks
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd -filename XXXXXXXXXX.vdi -size 10240 -format VDI -variant Fixed
Mark the disks as shareable
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyhd ocr1.vdi --type shareable
Attach the shared disks to the virtual machine
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach rac1 --storagectl "SATA 控制器" --port 1 --device 0 --type hdd --medium ocr1.vdi --mtype shareable
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach rac1 --storagectl "SATA 控制器" --port 2 --device 0 --type hdd --medium ocr2.vdi --mtype shareable
Install the required packages
Configure /etc/hosts
(The SCAN IP below is a virtual IP that clients use to connect to the RAC. With the SERVICE_NAME and this virtual IP in the client's TNS configuration, the RAC provides load balancing by itself; unlike 10g, no client-side load-balancing configuration is needed.)
#public ip
192.168.50.101 rac1
192.168.50.102 rac2
#priv ip
192.168.60.101 rac1priv
192.168.60.102 rac2priv
#vip ip
192.168.50.111 rac1vip
192.168.50.112 rac2vip
#scan ip
192.168.50.215 racscan
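The client side of the SCAN note above can be sketched as a tnsnames.ora entry. The alias RAC and SERVICE_NAME rac are illustrative assumptions; the service name must match whatever the database actually registers:

```
RAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racscan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rac)
    )
  )
```

With this single entry, connections land on the SCAN listener, which hands them to whichever node the cluster chooses.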
Create the groups and users
/usr/sbin/groupadd -g 501 oinstall
/usr/sbin/groupadd -g 502 dba
/usr/sbin/groupadd -g 503 oper
/usr/sbin/groupadd -g 504 asmadmin
/usr/sbin/groupadd -g 505 asmoper
/usr/sbin/groupadd -g 506 asmdba
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
Set the passwords
echo grid:asdasd | chpasswd
echo oracle:asdasd | chpasswd
Create the directories
mkdir -p /u01/app/oracle
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R grid:oinstall /u01/
chown -R oracle:oinstall /u01/app/oracle
Adjust the kernel parameters (increase these values when installing on physical hardware)
vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1054504960
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
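After editing, the new values are loaded with `sysctl -p` (as root). The values actually in effect can always be read back from /proc; a minimal spot check:

```shell
# Load /etc/sysctl.conf into the running kernel (run as root):
#   /sbin/sysctl -p
# Read back a few of the live values to confirm they took effect:
for p in kernel/shmmax kernel/shmall fs/file-max; do
    printf '%s = %s\n' "$p" "$(cat /proc/sys/$p)"
done
```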
vi /etc/security/limits.conf
#ORACLE SETTING
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
vi /etc/pam.d/login
#ORACLE SETTING
session required pam_limits.so
Stop the operating system's time synchronization
(If you use the system's own NTP for time sync, you must configure an NTP server.
If you use Oracle's bundled Cluster Time Synchronization Service, CTSS, disable the system ntpd and move ntp.conf out of the way.)
/sbin/service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.bak
If /dev/shm is smaller than 1 GB, enlarge it in /etc/fstab:
tmpfs /dev/shm tmpfs defaults,size=1G 0 0
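A quick way to confirm the current size (a sketch; assumes tmpfs is mounted at /dev/shm):

```shell
# Report /dev/shm capacity in kilobytes; 11g needs at least 1 GiB here
# for the MEMORY_TARGET check to pass.
shm_kb=$(df -Pk /dev/shm | awk 'NR==2 {print $2}')
if [ "$shm_kb" -lt 1048576 ]; then
    echo "/dev/shm is only ${shm_kb} KB; after editing fstab, apply with:"
    echo "  mount -o remount /dev/shm"
else
    echo "/dev/shm size is sufficient (${shm_kb} KB)"
fi
```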
Set up each user's .bash_profile
grid:
alias df='df -h'
alias du='du -sh'
alias la='ls -lha'
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
ORACLE_PATH=/u01/app/oracle/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
THREADS_FLAG=native; export THREADS_FLAG
PATH=${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
export ORACLE_HOSTNAME=rac1
export CVUQDISK_GRP=oinstall
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
oracle:
alias df='df -h'
alias du='du -sh'
alias la='ls -lha'
ORACLE_SID=rac1; export ORACLE_SID
ORACLE_HOSTNAME=rac1
ORACLE_UNQNAME=rac; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_PATH=/u01/app/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
THREADS_FLAG=native; export THREADS_FLAG
PATH=${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
export CVUQDISK_GRP=oinstall
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
Clone the virtual machine (or install a second one from scratch)
Shut down rac1, then run:
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" clonehd "D:\ls\vm\Virtual Machines\rac1\rac1.vdi" "D:\ls\vm\Virtual Machines\rac1\rac2.vdi"
Then create a new virtual machine that uses this disk, and configure its network adapters.
Attach the shared disks to rac2.
After booting into the system, finish the network and environment-variable configuration.
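The rac2-side identity fixups can be sketched as below (RHEL 5-era file layout assumed; shown here against a scratch copy so nothing real is touched — on the actual node you would edit /etc/sysconfig/network and the ifcfg-eth* files directly):

```shell
# Work on a scratch copy of /etc/sysconfig/network for demonstration.
cfg=/tmp/network.demo
printf 'NETWORKING=yes\nHOSTNAME=rac1\n' > "$cfg"

# Give the clone its own hostname...
sed -i 's/^HOSTNAME=.*/HOSTNAME=rac2/' "$cfg"

# ...and verify the change took effect.
grep '^HOSTNAME=' "$cfg"    # prints HOSTNAME=rac2
```

The same pattern applies to IPADDR in the ifcfg-eth0/ifcfg-eth1 files, followed by a network service restart.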
Install the ASMLib packages
rpm -ivh oracleasm-support-2.1.7-1.el5.x86_64.rpm
rpm -ivh oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
rpm -ivh oracleasmlib-2.0.4-1.el5.x86_64.rpm
Partition the shared storage
...
Configure ASM
Initialize on both nodes:
[root@rac1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
Create the ASM disks on one node (rac2 in the transcript below):
[root@rac2 ~]# /etc/init.d/oracleasm createdisk CRS1 /dev/sdb1
Marking disk "CRS1" as an ASM disk: [ OK ]
[root@rac2 ~]# /etc/init.d/oracleasm createdisk CRS2 /dev/sdc1
Marking disk "CRS2" as an ASM disk: [ OK ]
[root@rac2 ~]# /etc/init.d/oracleasm createdisk DATA1 /dev/sdd1
Marking disk "DATA1" as an ASM disk: [ OK ]
[root@rac2 ~]# /etc/init.d/oracleasm createdisk DATA2 /dev/sde1
Marking disk "DATA2" as an ASM disk: [ OK ]
[root@rac2 ~]# /etc/init.d/oracleasm createdisk RECOVERY /dev/sdf1
Marking disk "RECOVERY" as an ASM disk: [ OK ]
Then rescan on the other node (rac1) so it picks up the new disks:
[root@rac1 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@rac1 ~]# /etc/init.d/oracleasm listdisks
CRS1
CRS2
DATA1
DATA2
RECOVERY
Install cvuqdisk
Go into the rpm folder under the grid installation media and run:
rpm -ivh cvuqdisk-1.0.7-1.rpm
As the grid user, run the environment-check script from the grid installation media directory. (This check requires SSH user equivalence between the nodes; you can configure it by hand now, or let the grid installer set it up automatically and run the check afterwards.)
(
Configuring SSH equivalence by hand:
On both nodes:
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
On rac1:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'rac2 (192.168.50.112)' can't be established.
RSA key fingerprint is 76:2a:3a:c3:59:e0:d7:0b:9e:06:3f:50:6c:42:72:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.50.112' (RSA) to the list of known hosts.
oracle@rac2's password:
ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@rac2's password:
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
authorized_keys 100% 1988 1.9KB/s 00:00
Then test from both nodes:
ssh rac1 date
ssh rac2 date
ssh rac1priv date
ssh rac2priv date
)
The environment-check script:
<grid media directory>/runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
Run runInstaller from the grid installation directory
The SCAN name requested below is the SCAN entry from /etc/hosts.
Deselect GNS. GNS depends on a DNS server and DHCP and exists to help clients reach the SCAN; since the SCAN is already in /etc/hosts, it is not needed here.
Click SSH Connectivity and enter the grid user's password. Click Setup and the installer configures SSH equivalence itself; once done, click Test to verify, then continue.
Choose ASM as the storage option.
Configure only the CRS disk group at this stage.
For each missing package the installer reports, check whether it is in fact installed, then ignore those warnings.
Two verification failures remain at the end; ignore them for now, as they can be configured later.
Verify the cluster as the grid user
[grid@rac1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@rac1 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora.CRS.dg ora....up.type 0/5 0/ ONLINE ONLINE rac1
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE rac1
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE rac1
ora.eons ora.eons.type 0/3 0/ ONLINE ONLINE rac1
ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type 0/5 0/0 OFFLINE OFFLINE
ora.ons ora.ons.type 1/3 0/ ONLINE ONLINE rac1
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE rac1
ora....C1.lsnr application 0/5 0/0 ONLINE ONLINE rac1
ora.rac1.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.rac1.ons application 1/3 0/0 ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE rac2
ora....C2.lsnr application 0/5 0/0 ONLINE ONLINE rac2
ora.rac2.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.rac2.ons application 0/3 0/0 ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac2
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE rac1
[grid@rac1 ~]$ olsnodes -n
rac1 1
rac2 2
[grid@rac1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac1,rac2
[grid@rac1 ~]$ ps -ef | grep lsnr | grep -v grep | grep -v ocfs | awk '{print $9}'
LISTENER_SCAN1
LISTENER
[grid@rac1 ~]$ srvctl status asm -a
ASM is running on rac1,rac2
ASM is enabled.
[grid@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2252
Available space (kbytes) : 259868
ID : 781100079
Device/File Name : +CRS
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
[grid@rac1 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 828a975a3b254f53bf3b3e60b254defe (ORCL:CRS1) [CRS]
Located 1 voting disk(s).
Create the data-file disk groups in the GUI
Run asmca.
Install the database software as the oracle user
Finally, create the database and you are done.
If the installation fails and you want to reinstall, note two things:
1. Do not delete the directories by hand; run the deinstall script shipped with the grid installation files instead.
2. When reinstalling grid and selecting the CRS disks, you will get an error saying the disk already contains an OCR or similar. You can clear the beginning of the disk with dd.
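The dd cleanup in point 2 can be sketched as follows. It is demonstrated on a scratch image file standing in for a shared-disk partition; on a real node you would point DISK at /dev/sdb1 and so on, after triple-checking the device name, since dd is destructive:

```shell
# Scratch file standing in for a shared-disk partition.
DISK=/tmp/fake_asm_disk.img
printf 'ORCLDISKCRS1' > "$DISK"       # fake ASMLib header for the demo
truncate -s 10M "$DISK"

# Zero the first megabyte, where the ASMLib tag and OCR metadata live,
# so a fresh grid install sees the disk as clean.
dd if=/dev/zero of="$DISK" bs=1M count=1 conv=notrunc 2>/dev/null

# The header should now be all zero bytes.
head -c 8 "$DISK" | tr -d '\0' | wc -c    # prints 0
```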