Adding a Node to an Oracle RAC Cluster

We currently have a two-node RAC: node1 is up1 and node2 is up2. Unless noted otherwise, the commands for adding node3 are run on host up1.

Task: add one node to a two-node Oracle 11g RAC
Version: 11.2.0.4
OS: Red Hat Enterprise Linux 6.8, 64-bit
Approach:
Planning: hostnames, IP addresses, storage, etc.
Steps:
I. System configuration
1. Disable the firewall and SELinux
2. Configure the host IP addresses and hostnames
3. Stop the NTP service (Oracle's own cluster time synchronization is used, so the OS NTP service must be disabled)
4. Configure DNS (node 1 is the DNS server)
5. Create the installation users, create the directories, grant permissions, and configure environment variables
5.1 grid user
5.2 oracle user
6. Configure system parameters (on node 3)
resource limits
kernel parameters
7. Install required packages (yum was configured during the DNS step) (on node 3)
8. Configure the shared disks with udev
9. Configure SSH user equivalence
for the grid user
for the oracle user
10. Install specific library packages
II. Install Grid Infrastructure (clusterware) on node up3
11. Check whether up3 meets the RAC install prerequisites (run as the grid and oracle users on an existing node)
12. Add the software to the new node
13. Verify the clusterware was added successfully
III. Install the database software on node up3
14. Install the Oracle software on node 3
15. Add an instance on node 3
IV. Validation
Server: Linux 6.8, 64-bit
oracle & grid: 11.2.0.4
Hostnames and IP addresses:
up1.node.com : 192.168.1.130/24 172.16.1.131/24
up2.node.com : 192.168.1.140/24 172.16.1.141/24
RAC plan:
#node1
192.168.1.130 up1.node.com up1
172.16.1.131 up1priv.node.com up1priv
192.168.1.82 up1vip.node.com up1vip
#node2
192.168.1.140 up2.node.com up2
172.16.1.141 up2priv.node.com up2priv
192.168.1.92 up2vip.node.com up2vip
#node3
192.168.1.150 up3.node.com up3
172.16.1.151 up3priv.node.com up3priv
192.168.1.102 up3vip.node.com up3vip
#scanip
192.168.1.71 scanip.node.com scanip
192.168.1.72 scanip.node.com scanip
192.168.1.73 scanip.node.com scanip
Note: before starting, make sure the existing RAC database is up and running and the listeners are started.
Steps:
I. System configuration
On node 3:
1. Disable the firewall and SELinux
chkconfig NetworkManager off
chkconfig iptables off
/etc/init.d/iptables stop
chkconfig --list | grep iptables
vim /etc/selinux/config
SELINUX=disabled
SELINUXTYPE=targeted
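The config-file change takes effect at the next boot; to put SELinux into permissive mode for the current session as well (optional), you can run:
setenforce 0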
2. Configure the host IP addresses and hostname
On node 3:
vim /etc/sysconfig/network-scripts/ifcfg-eth0
vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=08:00:27:67:a8:42
NM_CONTROLLED=yes
ONBOOT=yes
IPADDR=172.16.1.151
BOOTPROTO=none
NETMASK=255.255.255.0
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=up3.node.com
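After editing, restart networking and confirm the addresses and hostname took effect, for example:
service network restart
hostname
ip addr | grep 'inet '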
3. Stop the NTP service (Oracle's own cluster time synchronization is used, so the OS NTP service must be disabled)
service ntpd stop
chkconfig ntpd off
/etc/init.d/ntpd status
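Note that for Oracle's Cluster Time Synchronization Service (CTSS) to run in active mode, NTP must be fully deconfigured, which includes renaming the config file:
mv /etc/ntp.conf /etc/ntp.conf.bak
After Grid Infrastructure is installed, crsctl check ctss reports which mode CTSS is running in.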
4. Configure DNS (node 1 is the DNS server)
Mount the installation DVD in the VM so the required packages can be installed with yum.
DNS configuration:
4.1. Install the DNS packages on node 3
mkdir /source
mount /dev/cdrom /source/
vim /etc/yum.repos.d/x.repo
[ok]
name=ok
baseurl=file:///source/
gpgcheck=0
enabled=1
yum clean all
yum list
yum install bind* -y
4.2. Configuration on the master node:
[root@up1 ~]# cd /var/named/chroot/etc/
[root@up1 etc]# pwd
/var/named/chroot/etc
[root@up1 etc]# cat named.conf
options {
directory "/dba";
};
zone "node.com" in {
type master;
file "node.com.zone";
};
zone "1.168.192.in-addr.arpa" in {
type master;
file "192.168.1.zone";
};
zone "0.0.127.in-addr.arpa" in {
type master;
file "127.0.0.zone";
};
[root@up1 etc]# cd /var/named/chroot/
[root@up1 chroot]# mkdir dba
[root@up1 chroot]# cd dba
[root@up1 dba]# ls
127.0.0.zone 192.168.1.zone node.com.zone
[root@up1 dba]# vim node.com.zone
$TTL 86400
@ IN SOA up1.node.com. root. (
2017040302 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
IN NS up1.node.com.
IN NS up2.node.com.
IN NS up3.node.com.
up1 IN A 192.168.1.130
up2 IN A 192.168.1.140
up3 IN A 192.168.1.150
up1vip IN A 192.168.1.82
up2vip IN A 192.168.1.92
up3vip IN A 192.168.1.102
scanip IN A 192.168.1.71
scanip IN A 192.168.1.72
scanip IN A 192.168.1.73
[root@up1 dba]# vim 192.168.1.zone
$TTL 86400
@ IN SOA up1.node.com. root.node.com. (
2017040302 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
IN NS up1.node.com.
IN NS up2.node.com.
IN NS up3.node.com.
130 IN PTR up1.node.com.
140 IN PTR up2.node.com.
150 IN PTR up3.node.com.
82 IN PTR up1vip.node.com.
92 IN PTR up2vip.node.com.
102 IN PTR up3vip.node.com.
71 IN PTR scanip.node.com.
72 IN PTR scanip.node.com.
73 IN PTR scanip.node.com.
[root@up1 dba]# vim 127.0.0.zone
$TTL 86400
@ IN SOA up1.node.com. root.node.com. (
2017040302 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
IN NS up1.node.com.
IN NS up2.node.com.
IN NS up3.node.com.
1 IN PTR localhost.node.com.
[root@up1 dba]# ls -l
total 12
-rwxrwx--- 1 root named 529 Apr 3 23:42 127.0.0.zone
-rwxrwx--- 1 root named 794 Apr 3 23:41 192.168.1.zone
-rwxrwx--- 1 root named 790 Apr 3 23:41 node.com.zone
[root@up1 dba]# vim /etc/resolv.conf
search node.com
nameserver 192.168.1.130
domain node.com
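Before restarting named, the configuration and zone files can be sanity-checked with the BIND utilities, for example:
named-checkconf /var/named/chroot/etc/named.conf
named-checkzone node.com /var/named/chroot/dba/node.com.zone
named-checkzone 1.168.192.in-addr.arpa /var/named/chroot/dba/192.168.1.zone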
4.3.1. Configuration on the new node up3:
[root@up3 etc]# cd /var/named/chroot/etc
[root@up3 etc]# vim named.conf
options {
directory "/dba1";
};
zone "node.com" in {
type slave;
file "node.com.zone";
masters {192.168.1.130;};
};
zone "1.168.192.in-addr.arpa" in {
type slave;
file "192.168.1.zone";
masters{192.168.1.130;};
};
zone "0.0.127.in-addr.arpa" in {
type slave;
file "127.0.0.zone";
masters{192.168.1.130;};
};
[root@up3 chroot]# cd /var/named/chroot
[root@up3 chroot]# mkdir dba1
[root@up3 chroot]# chown root:named dba1
[root@up3 chroot]# chmod 775 dba1
[root@up3 chroot]# cd dba1
[root@up3 dba1]# vim /etc/resolv.conf
search node.com
nameserver 192.168.1.130
domain node.com
4.3.2. On the slave node up2, clear any stale slave zone files:
[root@up2 etc]# cd /var/named/chroot/dba1
[root@up2 dba1]# rm -rf *
4.4. Start the DNS service on all three nodes, master first:
On the master:
/etc/init.d/named restart
chkconfig named on
On slave up2:
/etc/init.d/named restart
chkconfig named on
On slave up3:
/etc/init.d/named restart
chkconfig named on
4.5. Test on all three nodes, in order:
On master and slaves:
ping up1
ping up2
ping up3
ping up1.node.com
ping up2.node.com
ping up3.node.com
nslookup up1vip.node.com
nslookup up2vip.node.com
nslookup up3vip.node.com
nslookup up1vip
nslookup up2vip
nslookup up3vip
nslookup scanip
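The SCAN name should resolve to all three addresses in round-robin order; dig (from bind-utils) shows them in a single query, for example:
dig +short scanip.node.com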
5. Create the installation users, directories, permissions, and environment variables
On node 3:
5.1 The grid user
groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "grid Infrastructure Owner" grid
echo "grid" | passwd --stdin grid
[root@up3 src]# su - grid
Because we ran the RAC upgrade experiment first, the paths changed; here we use the second set of environment variables. (If you do the add-node experiment first, use the first set.)
[grid@up3 ~]$ vim .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM3
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_TERM=xterm
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
alias sqlplus='/usr/local/bin/rlwrap sqlplus'
-----------------------------------------------------------------------------------------------------------------------------------
The second set:
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM3
export ORACLE_BASE=/u01/11204/app/grid
export ORACLE_HOME=/u01/11204/app/11.2.0/grid
export ORACLE_TERM=xterm
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
alias sqlplus='/usr/local/bin/rlwrap sqlplus'
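After editing a profile, reload it and sanity-check the key variables, for example:
source ~/.bash_profile
echo $ORACLE_SID $ORACLE_HOME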
5.2 The oracle user
Run as root:
groupadd -g 1300 dba
groupadd -g 1301 oper
useradd -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
echo "oracle" | passwd --stdin oracle
[root@up3 src]# su - oracle
As above, because the RAC upgrade experiment was done first, we use the second set of environment variables here.
[oracle@up3 ~]$ vim .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=racdb3
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_UNQNAME=racdb
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:/u01/app/11.2.0/grid/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
alias sqlplus='/usr/local/bin/rlwrap sqlplus'
alias rman='/usr/local/bin/rlwrap rman'
umask 022
--------------------------------------------------------------------------------------------------------------------------------
The second set:
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=racdb3
export ORACLE_BASE=/u01/11204/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_UNQNAME=racdb
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:/u01/11204/app/11.2.0/grid/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
alias sqlplus='/usr/local/bin/rlwrap sqlplus'
alias rman='/usr/local/bin/rlwrap rman'
umask 022
Note: we ran the add-node experiment after the RAC upgrade experiment, and found that even though the node was added successfully, some configuration files under the old paths were still copied to node3. I therefore suggest creating both sets of directories, part 1 and part 2.
Part 1, run as root:
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/
chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/app/11.2.0
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
Part 2:
mkdir -p /u01/11204/app/grid
mkdir -p /u01/11204/app/11.2.0/grid
mkdir -p /u01/11204/app/oracle
mkdir -p /u01/11204/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/11204/
chown -R grid:oinstall /u01/11204/app/grid
chown -R grid:oinstall /u01/11204/app/11.2.0
chown -R oracle:oinstall /u01/11204/app/oracle
chmod -R 775 /u01
6. Configure system parameters
On node 3:
Resource limits:
echo "oracle soft nproc 2047" >>/etc/security/limits.conf
echo "oracle hard nproc 16384" >>/etc/security/limits.conf
echo "oracle soft nofile 1024" >>/etc/security/limits.conf
echo "oracle hard nofile 65536" >>/etc/security/limits.conf
echo "grid soft nproc 2047" >>/etc/security/limits.conf
echo "grid hard nproc 16384" >>/etc/security/limits.conf
echo "grid soft nofile 1024" >>/etc/security/limits.conf
echo "grid hard nofile 65536" >>/etc/security/limits.conf
echo "session required /lib/security/pam_limits.so" >>/etc/pam.d/login
echo "session required pam_limits.so" >>/etc/pam.d/login
Kernel parameters:
echo "fs.aio-max-nr = 1048576" >> /etc/sysctl.conf
echo "fs.file-max = 6815744" >> /etc/sysctl.conf
echo "kernel.shmall = 2097152" >> /etc/sysctl.conf
echo "kernel.shmmax = 1054472192" >> /etc/sysctl.conf
echo "kernel.shmmni = 4096" >> /etc/sysctl.conf
echo "kernel.sem = 250 32000 100 128" >> /etc/sysctl.conf
echo "net.ipv4.ip_local_port_range = 9000 65500" >> /etc/sysctl.conf
echo "net.core.rmem_default = 262144" >> /etc/sysctl.conf
echo "net.core.rmem_max = 4194304" >> /etc/sysctl.conf
echo "net.core.wmem_default = 262144" >> /etc/sysctl.conf
echo "net.core.wmem_max = 1048586" >> /etc/sysctl.conf
echo "net.ipv4.tcp_wmem = 262144 262144 262144" >> /etc/sysctl.conf
echo "net.ipv4.tcp_rmem = 4194304 4194304 4194304" >> /etc/sysctl.conf
sysctl -p
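sysctl -p echoes each value as it is applied; individual settings can also be checked afterwards, for example:
sysctl kernel.shmmax kernel.sem fs.file-max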
7. Install required packages (yum was already configured in the DNS step)
On node 3:
yum install -y readline* binutils compat-libstdc++ compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel expat gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers libaio libaio-devel libgcc libstdc++ libstdc++-devel make pdksh sysstat unixODBC unixODBC-devel
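To confirm nothing was missed, the key packages can be queried afterwards, for example:
for p in binutils gcc gcc-c++ glibc glibc-devel libaio libaio-devel make sysstat unixODBC unixODBC-devel compat-libstdc++-33; do rpm -q $p; done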
Shut node 3 down once (so the SELinux change takes effect, and to attach the shared disks and remove the DVD from the virtual drive), then power it back on:
init 0
8. Configure the shared disks with udev
8.1. Configure iSCSI on node 3:
yum install iscsi* -y
/etc/init.d/iscsid start
iscsiadm -m discovery -t sendtargets -p 172.16.11.250:3260
/etc/init.d/iscsi start
chkconfig iscsi on
chkconfig iscsid on
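Confirm the iSCSI session is established before checking the disks, for example:
iscsiadm -m session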
[root@up3 ~]# fdisk -l
8.2. Compare the shared-disk UUIDs across the three nodes and confirm they are identical (the /dev/sd* device names may differ):
Node 1:
[root@up1 ~]# /sbin/scsi_id -g -u /dev/sdb
1ATA_VBOX_HARDDISK_VB6116258c-7210e337
[root@up1 ~]# /sbin/scsi_id -g -u /dev/sdc
1ATA_VBOX_HARDDISK_VB02cf4964-aad254ab
[root@up1 ~]# /sbin/scsi_id -g -u /dev/sdd
1ATA_VBOX_HARDDISK_VBb6362300-4cd51593
Node 2:
[root@up2 ~]# /sbin/scsi_id -g -u /dev/sdb
1ATA_VBOX_HARDDISK_VB6116258c-7210e337
[root@up2 ~]# /sbin/scsi_id -g -u /dev/sdc
1ATA_VBOX_HARDDISK_VB02cf4964-aad254ab
[root@up2 ~]# /sbin/scsi_id -g -u /dev/sdd
1ATA_VBOX_HARDDISK_VBb6362300-4cd51593
Node 3:
[root@up3 ~]# /sbin/scsi_id -g -u /dev/sdb
1ATA_VBOX_HARDDISK_VB6116258c-7210e337
[root@up3 ~]# /sbin/scsi_id -g -u /dev/sdc
1ATA_VBOX_HARDDISK_VB02cf4964-aad254ab
[root@up3 ~]# /sbin/scsi_id -g -u /dev/sdd
1ATA_VBOX_HARDDISK_VBb6362300-4cd51593
8.3. Bind the disks with udev
On node 1:
scp /etc/udev/rules.d/99-oracle-asmdisk.rules up3:/etc/udev/rules.d/99-oracle-asmdisk.rules
All three nodes must have:
[root@up1 /]# cat /etc/udev/rules.d/99-oracle-asmdisk.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB6116258c-7210e337", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB02cf4964-aad254ab", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBb6362300-4cd51593", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
Run on node 3:
/sbin/start_udev
ll /dev/asm*
brw-rw---- 1 grid asmadmin 8, 16 Apr 16 02:41 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Apr 16 02:41 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Apr 16 02:41 /dev/asm-diskd
Make udev start at boot on node 3:
vi /etc/rc.d/rc.local
and add:
/sbin/start_udev
9. Configure SSH user equivalence
For the grid user (as grid):
[grid@up3 ~]$ ssh-keygen -t rsa
[grid@up3 ~]$ ssh-keygen -t dsa
On node 1:
[grid@up1 ~]$ cd .ssh
[grid@up1 .ssh]$ scp authorized_keys up3:/home/grid/.ssh
On node 3:
[grid@up3 .ssh]$ ls
authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub
[grid@up3 .ssh]$ cat id_dsa.pub >> authorized_keys
[grid@up3 .ssh]$ cat id_rsa.pub >> authorized_keys
[grid@up3 .ssh]$ scp authorized_keys up2:/home/grid/.ssh
[grid@up3 .ssh]$ scp authorized_keys up1:/home/grid/.ssh
[grid@up3 .ssh]$ ssh up1.node.com date; ssh up2.node.com date; ssh up3.node.com date;
[grid@up3 .ssh]$ ssh up1 date; ssh up2 date; ssh up3 date
On node 1:
[grid@up1 .ssh]$ ssh up1.node.com date; ssh up2.node.com date; ssh up3.node.com date;
[grid@up1 .ssh]$ ssh up1 date; ssh up2 date; ssh up3 date;
On node 2:
[grid@up2 .ssh]$ ssh up1.node.com date; ssh up2.node.com date; ssh up3.node.com date;
[grid@up2 .ssh]$ ssh up1 date; ssh up2 date; ssh up3 date;
For the oracle user (as oracle):
[oracle@up3 ~]$ ssh-keygen -t rsa
[oracle@up3 ~]$ ssh-keygen -t dsa
On node 1:
[oracle@up1 ~]$ cd .ssh
[oracle@up1 .ssh]$ scp /home/oracle/.ssh/authorized_keys up3:/home/oracle/.ssh
On node 3:
[oracle@up3 .ssh]$ ls
authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub
[oracle@up3 .ssh]$ cat id_rsa.pub >> authorized_keys
[oracle@up3 .ssh]$ cat id_dsa.pub >> authorized_keys
[oracle@up3 .ssh]$ scp authorized_keys up1:/home/oracle/.ssh
[oracle@up3 .ssh]$ scp authorized_keys up2:/home/oracle/.ssh
[oracle@up3 .ssh]$ ssh up1.node.com date; ssh up2.node.com date; ssh up3.node.com date;
[oracle@up3 .ssh]$ ssh up1 date; ssh up2 date; ssh up3 date
On node 1:
[oracle@up1 .ssh]$ ssh up1.node.com date; ssh up2.node.com date; ssh up3.node.com date;
[oracle@up1 .ssh]$ ssh up1 date; ssh up2 date; ssh up3 date;
On node 2:
[oracle@up2 .ssh]$ ssh up1.node.com date; ssh up2.node.com date; ssh up3.node.com date;
[oracle@up2 .ssh]$ ssh up1 date; ssh up2 date; ssh up3 date;
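The equivalence checks can also be looped; every call must print the date without any password or host-key prompt, for example:
for h in up1 up2 up3 up1.node.com up2.node.com up3.node.com; do ssh $h date; done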
10. Install specific library packages
On node 3:
[root@up3 src]# pwd
/usr/local/src
tar zxvf rlwrap-0.32.tar.gz
[root@up3 src]# ls
rlwrap-0.32
rlwrap-0.32.tar.gz
pdksh-5.2.14-37.el5_8.1.x86_64.rpm  # Note: pdksh appears to be required; the first time we tried adding a node without it, the copy from node 1 to node 3 failed (it may also have been because the old pre-upgrade directories were missing).
(This package is in the teacher_han -> RAC folder.)
Build and install rlwrap-0.32.tar.gz:
cd rlwrap-0.32
./configure
make ; make install
rpm -ivh pdksh-5.2.14-37.el5_8.1.x86_64.rpm
Install the cvuqdisk package:
On node 1:
[root@up1 rpm]# pwd
/usr/local/src/grid/rpm
[root@up1 rpm]# ls
cvuqdisk-1.0.9-1.rpm
[root@up1 rpm]# scp cvuqdisk-1.0.9-1.rpm up3:/usr/local/src/
[root@up1 rpm]# ssh up3
[root@up3 ~]# cd /usr/local/src/
[root@up3 src]# rpm -ivh cvuqdisk-1.0.9-1.rpm
On node 3, check for /lib64/libcap.so.1:
find / -name libcap*
cd /lib64/
ls -lrt libcap*
ln -s libcap.so.2.16 libcap.so.1
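Verify the symlink resolves, for example:
ls -l /lib64/libcap.so.1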
II. Install Grid Infrastructure (clusterware) on node up3
11. Check whether up3 meets the RAC install prerequisites (run as the grid and oracle users on an existing node)
On the master node up1:
su - grid
cluvfy stage -pre nodeadd -n up3 -fixup -verbose
cluvfy stage -post hwos -n up3
Ideally there are no errors. If you get the following error:
WARNING:
PRVF-5640 : Both search and domain entries are present in file "/etc/resolv.conf" on the following nodes: up1,up3
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one domain entry is defined
All nodes have one domain entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that domain is "node.com" as found on node "up1"
All nodes of the cluster have same value for 'domain'
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "node.com" as found on node "up1"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
up1 failed
up3 failed
PRVF-5637 : DNS response time could not be checked on following nodes: up1,up3
File "/etc/resolv.conf" is not consistent across nodes
Pre-check for node addition was unsuccessful on all the nodes.
Fix: the failure above is about name resolution in /etc/resolv.conf. In the GUI installer such minor issues can simply be ignored; here they cannot be ignored directly, so we modify the addNode.sh script. Once the script is changed, the pre-check above no longer blocks the run.
Still on the master node up1:
vim $ORACLE_HOME/oui/bin/addNode.sh
#!/bin/sh
OHOME=/u01/app/11.2.0/grid
INVPTRLOC=$OHOME/oraInst.loc
EXIT_CODE=0
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]
then
$ADDNODE
EXIT_CODE=$? ;
else
CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre ORACLE_HOME=$OHOME $*"
$CHECK_NODEADD
EXIT_CODE=$? ;
EXIT_CODE=0   # added line: ignore minor pre-check failures
if [ $EXIT_CODE -eq 0 ]
then
$ADDNODE
EXIT_CODE=$? ;
fi
fi
exit $EXIT_CODE ;
12. Add the clusterware to the new node
Run this command on an existing node, as the grid user, to push the clusterware to the new node.
On the master node up1:
su - grid
export IGNORE_PREADDNODE_CHECKS=Y
$ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={up3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={up3vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={up3priv}"
Run the root scripts it prompts for, as root on the new node up3:
/u01/app/oraInventory/orainstRoot.sh   # on node up3
/u01/app/11.2.0/grid/root.sh           # on node up3
The first script should end with:
The execution of the script is complete.
Seeing this means it ran OK.
The second script should end with:
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Seeing this means it ran OK.
To be verified: if the second script fails, don't forget to deconfigure before rerunning root.sh:
/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
(roothas.pl is the equivalent for standalone Oracle Restart installs; on a cluster node use rootcrs.pl.)
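Once both scripts complete on up3, a quick check from any node confirms the new node is registered in the cluster, for example:
su - grid
olsnodes -n
crsctl check crs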
13. Verify the clusterware was added successfully
Validate the newly added up3 and, for comparison, the existing up2. The checks still report errors, but the clusterware was in fact added successfully.
cluvfy stage -post nodeadd -n up3 -verbose
cluvfy stage -post nodeadd -n up2 -verbose
[grid@up2 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....ELOG.dg ora....up.type ONLINE ONLINE up1
ora.DATA1.dg ora....up.type ONLINE ONLINE up1
ora....ER.lsnr ora....er.type ONLINE ONLINE up1
ora....N1.lsnr ora....er.type ONLINE ONLINE up2
ora....N2.lsnr ora....er.type ONLINE ONLINE up3
ora....N3.lsnr ora....er.type ONLINE ONLINE up1
ora....VOTE.dg ora....up.type ONLINE ONLINE up1
ora.asm ora.asm.type ONLINE ONLINE up1
ora.cvu ora.cvu.type ONLINE ONLINE up1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE up1
ora.oc4j ora.oc4j.type ONLINE ONLINE up1
ora.ons ora.ons.type ONLINE ONLINE up1
ora.racdb.db ora....se.type ONLINE ONLINE up1
ora.scan1.vip ora....ip.type ONLINE ONLINE up2
ora.scan2.vip ora....ip.type ONLINE ONLINE up3
ora.scan3.vip ora....ip.type ONLINE ONLINE up1
ora....SM1.asm application ONLINE ONLINE up1
ora....P1.lsnr application ONLINE ONLINE up1
ora.up1.gsd application OFFLINE OFFLINE
ora.up1.ons application ONLINE ONLINE up1
ora.up1.vip ora....t1.type ONLINE ONLINE up1
ora....SM2.asm application ONLINE ONLINE up2
ora....P2.lsnr application ONLINE ONLINE up2
ora.up2.gsd application OFFLINE OFFLINE
ora.up2.ons application ONLINE ONLINE up2
ora.up2.vip ora....t1.type ONLINE ONLINE up2
ora....SM3.asm application ONLINE ONLINE up3
ora....P3.lsnr application ONLINE ONLINE up3
ora.up3.gsd application OFFLINE OFFLINE
ora.up3.ons application ONLINE ONLINE up3
ora.up3.vip ora....t1.type ONLINE ONLINE up3
III. Install the database software on node up3
14. Install the Oracle software on node 3
On the master node up1:
su - oracle
$ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={up3}"
Run the root.sh script it prompts for, as root on the new node up3:
/u01/app/oracle/product/11.2.0/db_1/root.sh
15.节点3
添加实例
主节点 up1
:
[oracle@up1 ~]$ dbca
Or add the instance directly from the command line (run as the oracle user on an existing node). The command line is recommended; note that the sys password must match the one you set!
su - oracle
dbca -silent -addInstance -nodeList up3 -gdbName racdb -instanceName racdb3 -sysDBAUserName sys -sysDBAPassword "oracle"
Output like the following is displayed:
Adding instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
66% complete
Completing instance management.
76% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/racdb/racdb.log" for further details.
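The new instance can also be confirmed with srvctl (as the oracle user, from any node), for example:
srvctl status database -d racdb
which should report racdb1, racdb2, and racdb3 all running.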
IV. Validation
On node 1:
SYS@racdb1> select INST_ID,INSTANCE_NAME,STATUS from gv$instance;
INST_ID INSTANCE_NAME STATUS
---------- ---------------- ------------
1 racdb1 OPEN
3 racdb3 OPEN
2 racdb2 OPEN
SYS@racdb1> conn scott/tiger
Connected.
SCOTT@racdb1> create table d1 as select * from dept;
Table created.
On node 3:
[oracle@up3 ~]$ sqlplus / as sysdba
SYS@racdb3> select INST_ID,INSTANCE_NAME,STATUS from gv$instance;
INST_ID INSTANCE_NAME STATUS
---------- ---------------- ------------
3 racdb3 OPEN
2 racdb2 OPEN
1 racdb1 OPEN
SYS@racdb3> conn scott/tiger
Connected.
SCOTT@racdb3> select * from d1;
DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
SCOTT@racdb3> exit
[oracle@up3 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....ELOG.dg ora....up.type ONLINE ONLINE up1
ora.DATA1.dg ora....up.type ONLINE ONLINE up1
ora....ER.lsnr ora....er.type ONLINE ONLINE up1
ora....N1.lsnr ora....er.type ONLINE ONLINE up2
ora....N2.lsnr ora....er.type ONLINE ONLINE up3
ora....N3.lsnr ora....er.type ONLINE ONLINE up1
ora....VOTE.dg ora....up.type ONLINE ONLINE up1
ora.asm ora.asm.type ONLINE ONLINE up1
ora.cvu ora.cvu.type ONLINE ONLINE up1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE up1
ora.oc4j ora.oc4j.type ONLINE ONLINE up1
ora.ons ora.ons.type ONLINE ONLINE up1
ora.racdb.db ora....se.type ONLINE ONLINE up1
ora.scan1.vip ora....ip.type ONLINE ONLINE up2
ora.scan2.vip ora....ip.type ONLINE ONLINE up3
ora.scan3.vip ora....ip.type ONLINE ONLINE up1
ora....SM1.asm application ONLINE ONLINE up1
ora....P1.lsnr application ONLINE ONLINE up1
ora.up1.gsd application OFFLINE OFFLINE
ora.up1.ons application ONLINE ONLINE up1
ora.up1.vip ora....t1.type ONLINE ONLINE up1
ora....SM2.asm application ONLINE ONLINE up2
ora....P2.lsnr application ONLINE ONLINE up2
ora.up2.gsd application OFFLINE OFFLINE
ora.up2.ons application ONLINE ONLINE up2
ora.up2.vip ora....t1.type ONLINE ONLINE up2
ora....SM3.asm application ONLINE ONLINE up3
ora....P3.lsnr application ONLINE ONLINE up3
ora.up3.gsd application OFFLINE OFFLINE
ora.up3.ons application ONLINE ONLINE up3
ora.up3.vip ora....t1.type ONLINE ONLINE up3
On node 3, configure the SQL*Plus prompt (optional):
su - oracle
cd $ORACLE_HOME/sqlplus/admin
vim glogin.sql
set sqlprompt '_user"@"_connect_identifier> '
define _editor = 'vim'
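With this glogin.sql in place, every new SQL*Plus session shows the user and instance in the prompt (e.g. SCOTT@racdb3>), which makes it obvious which node and instance you are connected to.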