Installing an Oracle 10g Two-Node Cluster (RAC)

  Host configuration notes (each node):
Each node must have two NICs and support TCP/IP; the servers running the cluster software must also support UDP.

Disable the firewall:
service iptables stop

Disable SELinux:
vi /etc/selinux/config
SELINUX=disabled
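The config-file change takes effect only after a reboot; to stop enforcement immediately for the current session, switch to permissive mode:
setenforce 0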

Use a static IP configuration (BOOTPROTO=static) and specify the gateway:
# Intel Corporation 82566MM Gigabit Network Connection
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.1.1.135
NETMASK=255.255.255.0
GATEWAY=10.1.3.1
HWADDR=00:1E:37:D6:FA:44
ONBOOT=yes

Restart networking: service network restart

The hostname must not appear on the loopback line (do not map it to 127.0.0.1 in /etc/hosts)!

If a single-instance ASM service was ever started on the host, remove it first: $ORACLE_HOME/bin/localconfig delete

Uninstall any single-instance (exclusive mode) Oracle software (uninstall with the OUI first, then manually clean up the leftovers: /etc/*.ora, $ORACLE_HOME).

If /etc/redhat-release reports a release newer than 4, lower the reported version to 4 so the 10g installer's OS check passes:
/etc/redhat-release
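For example (the exact release string here is an assumption; back the file up so it can be restored after the installation):
cp /etc/redhat-release /etc/redhat-release.bak
echo "Red Hat Enterprise Linux AS release 4 (Nahant)" > /etc/redhat-release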

----------------------------------------------------------------------------------------------------------

Configure /etc/hosts (all nodes)

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
# public Network - (eth0)
10.1.1.135 xie.uplooking1.com
10.1.1.132 xie.uplooking.com
# public virtual IP (eth0:#)
10.1.1.136 xie.uplooking1.com-vip
10.1.1.133 xie.uplooking.com-vip
# private Interconnect - (eth0:0)
10.1.2.135 xie.uplooking1.com-priv
10.1.2.132 xie.uplooking.com-priv

Configure ifcfg-eth0:0 (all nodes)
[root@xie network-scripts]# vi ifcfg-eth0:0
# Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+
DEVICE=eth0:0
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.1.2.132
NETMASK=255.255.255.0

Restart networking: service network restart

----------------------------------------------------------------------------------------
Configure hangcheck-timer (monitors whether the Linux kernel has hung):
vi /etc/modprobe.conf
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
Load hangcheck-timer automatically at boot:
vi /etc/rc.local
modprobe hangcheck-timer
Check that the hangcheck-timer module is loaded:
lsmod | grep hangcheck_timer
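The module also logs its settings when it loads; to confirm the tick/margin values took effect (exact wording varies by kernel version):
dmesg | grep -i hangcheck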
-------------------------------------------------------------------------------------------------------------------------------
Create the oracle user:
Run the following scripts:
1. install.sh
#!/bin/bash
. ./adduser.sh
. ./sysctl.sh
. ./limits.sh
. ./mkdir.sh
. ./chprofile.sh
2. adduser.sh
#!/bin/bash
ADDGROUPS="oinstall dba"
ADDUSERS="oracle"

for group in $ADDGROUPS ; do

        if [ -z "$( awk -F: '{print $1}' /etc/group |grep $group)" ]; then
                 groupadd   $group
                 echo " Add new group $group"
        else
                 echo " Group $group already existed"
        fi
done

for user in $ADDUSERS ; do

        if [ -z "$( awk -F: '{print $1}' /etc/passwd |grep $user)" ]; then
                 useradd   $user
                 echo " Add new user $user"
        else
                 echo " User $user already existed"
        fi
done
if usermod -g oinstall -G dba oracle ; then
   echo " Modify user oracle account success"
else
   echo " Modify user oracle account failure"
fi
3. sysctl.sh
#!/bin/bash
# echo 250 32000 100 128 > /proc/sys/kernel/sem
# echo 536870912 > /proc/sys/kernel/shmmax
# echo 4096 > /proc/sys/kernel/shmmni
# echo 2097152 > /proc/sys/kernel/shmall
# echo 65536 > /proc/sys/fs/file-max
# echo 1024 65000 > /proc/sys/net/ipv4/ip_local_port_range

SYSCTL_FILE="/etc/sysctl.conf"
RCLOCAL_FILE="/etc/rc.local"


if [ -f "$SYSCTL_FILE" ] ; then
        if [ -z "$(grep "Oracle" $SYSCTL_FILE)" ] ; then
                cat >>$SYSCTL_FILE << END
#Oracle configure kernel parameters
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144

END
                /sbin/sysctl -p
                echo " Add Oracle configure kernel parameters success"
        else
                echo " Oracle configure kernel parameters already existed"
        fi
else
        if [ -z "$(grep "Oracle" $RCLOCAL_FILE)" ] ; then
                cat >>$RCLOCAL_FILE << END
#Oracle configure kernel parameters
echo 536870912 > /proc/sys/kernel/shmmax
echo 4096 > /proc/sys/kernel/shmmni
echo 2097152 > /proc/sys/kernel/shmall
echo 250 32000 100 128 > /proc/sys/kernel/sem
echo 65536 > /proc/sys/fs/file-max
echo 1024 65000 > /proc/sys/net/ipv4/ip_local_port_range
END
                . $RCLOCAL_FILE
                echo " Add Oracle configure kernel parameters success"
        else
                echo " Oracle configure kernel parameters already existed"
        fi
fi

4. limits.sh
#!/bin/bash
LIMITS_FILE="/etc/security/limits.conf"
if [ -f "$LIMITS_FILE" ] ; then
        if [ -z "$(grep "Oracle" $LIMITS_FILE)" ] ; then
                cat >>$LIMITS_FILE << END
#Oracle configure  shell parameters
oracle soft nofile 65536
oracle hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
END
                echo " Add Oracle configure  shell parameters success"
        else
                echo " Oracle configure  shell parameters already existed"
        fi
else
        echo "$0: $LIMITS_FILE not found "
fi

5. mkdir.sh
#!/bin/bash
ORACLE_FILE_BASE="/u01/app/oracle"
ORACLE_FILE_VAR="/var/opt/oracle"
ORACLE_FILE_HOME="$ORACLE_FILE_BASE/product/10.2.0/db_1"

for directory in $ORACLE_FILE_BASE $ORACLE_FILE_VAR $ORACLE_FILE_HOME ; do
        if [ -d $directory ]; then
                echo " Directory $directory  already existed"
        else
                mkdir -p $directory
                chown -R oracle.dba $directory
                echo " Change directory $directory owner and group success"
        fi
done

6. chprofile.sh
#!/bin/bash
PROFILES="/home/oracle/.bashrc"
for PROFILE in $PROFILES ; do
if [ -f "$PROFILE" ] ; then
        if [ -z "$(grep "Oracle" $PROFILE)" ] ; then
                cat >>$PROFILE << END
# Oracle configure profile parameters
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=\$ORACLE_BASE/product/10.2.0/db_1
export CRS_HOME=/u01/crs_1
export PATH=\$ORACLE_HOME/bin:\$PATH
export ORACLE_OWNER=oracle
export ORACLE_SID=racdb1
export ORACLE_TERM=vt100
export THREADS_FLAG=native
export LD_LIBRARY_PATH=\$ORACLE_HOME/lib:\$LD_LIBRARY_PATH
export SQLPATH=/home/oracle
export EDITOR=vi
alias sqlplus='rlwrap sqlplus'
alias lsnrctl='rlwrap lsnrctl'
alias rman='rlwrap rman'
alias asmcmd='rlwrap asmcmd'
#
# change this NLS settings to suit your country:
# example:
# german_germany.we8iso8859p15, american_america.we8iso8859p2 etc.
#
export LANG=en_US
END
                echo " Add Oracle configure $PROFILE parameters success"
        else
                echo " Oracle configure $PROFILE parameters already existed"
        fi
else
        echo "$0: $PROFILE not found "
fi
done
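After install.sh has been run on every node, a quick sanity check of what the sub-scripts changed (a minimal sketch; the expected values follow from the settings above, and the $ORACLE_HOME check assumes ~/.bash_profile sources ~/.bashrc, as the RHEL default skeleton does):
id oracle                                # groups should list oinstall and dba
/sbin/sysctl kernel.shmmax kernel.sem fs.file-max
su - oracle -c 'ulimit -n; ulimit -u'    # expect 65536 and 16384
su - oracle -c 'echo $ORACLE_HOME'       # expect /u01/app/oracle/product/10.2.0/db_1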


Set a password for the oracle user (password used here: oracle).
---------------------------------------------------------------
Fix the ownership of /u01 on all nodes:
chown oracle.oinstall /u01 -R
----------------------------------------------------------------------------------------
Configure SSH user equivalence (trust) between the nodes:
stu90:10.1.1.132
su - oracle
ssh-keygen -t rsa
ssh-keygen -t dsa
cd .ssh
cat *.pub > authorized_keys

stu92:10.1.1.135
su - oracle
ssh-keygen -t rsa
ssh-keygen -t dsa
cd .ssh
cat *.pub > authorized_keys

stu90:10.1.1.132
scp authorized_keys oracle@10.1.1.135:/home/oracle/.ssh/keys_dbs

stu92:10.1.1.135
cat keys_dbs >> authorized_keys
scp authorized_keys oracle@10.1.1.132:/home/oracle/.ssh/
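If ssh still prompts for a password afterwards, check the permissions; sshd rejects group- or world-writable key files. As oracle on every node:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys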

Test the trust relationships (answer yes to the host-key prompts on first connect):
xie.uplooking.com:
ssh xie.uplooking.com
ssh xie1.uplooking.com
ssh xie-priv.uplooking.com
ssh xie1-priv.uplooking.com

xie1.uplooking.com
ssh xie.uplooking.com
ssh xie1.uplooking.com
ssh xie-priv.uplooking.com
ssh xie1-priv.uplooking.com
---------------------------------------------------------------------------------------------------------------
Test time synchronization between the nodes:
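Clock drift between nodes can fail cluvfy checks and skews timestamps in the CRS logs. A minimal check from one node (the NTP server address below is a placeholder, not part of this setup):
date; ssh xie.uplooking.com date
# if the clocks drift, sync every node against one common server:
ntpdate 10.1.1.1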
---------------------------------------------------------------
Prepare the shared volume (iSCSI):
iscsi server --> stu90
yum install scsi-target-utils

vi /etc/tgt/targets.conf
----------------------------------------
 <target iqn.2011-01.com.oracle.blues:luns1>
        backing-store /dev/sda5
       initiator-address 10.1.1.0/24
 </target>
----------------------------------------

vi /etc/udev/rules.d/55-openiscsi.rules
-----------------------------------------------
KERNEL=="sd*",BUS=="scsi",PROGRAM="/etc/udev/scripts/iscsidev.sh %b",SYMLINK+="iscsi/%c"
-----------------------------------------------

vi /etc/udev/scripts/iscsidev.sh
----------------------------------------
#!/bin/bash
# udev calls this with %b, the device's bus ID (e.g. "4:0:0:1");
# it prints the target-name suffix used for the /dev/iscsi/<name> symlink.
BUS=${1}
HOST=${BUS%%:*}      # SCSI host number = the part before the first ':'
[ -e /sys/class/iscsi_host ] || exit 1
file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})
if [ -z "${target_name}" ] ; then
       exit 1
fi
echo "${target_name##*:}"   # strip up to the last ':' (here: "luns1")
----------------------------------------

chmod +x /etc/udev/scripts/iscsidev.sh

chkconfig iscsi off
chkconfig iscsid off
chkconfig tgtd off

service iscsi start
service iscsid start
service tgtd start

tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
iscsiadm -m discovery -t sendtargets -p 10.1.1.xx
service iscsi start
fdisk -l
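To confirm the target and its LUN are actually exported before pointing the clients at it (run on the server):
tgtadm --lld iscsi --op show --mode target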

/*************************************************
To re-scan the targets (log out of the session, then rediscover):
iscsiadm -m session -u
iscsiadm -m discovery -t sendtargets -p 10.1.1.103
**************************************************/

iscsi client:10.1.1.92
vi /etc/udev/rules.d/55-openiscsi.rules
-----------------------------------------------
KERNEL=="sd*",BUS=="scsi",PROGRAM="/etc/udev/scripts/iscsidev.sh %b",SYMLINK+="iscsi/%c"
-----------------------------------------------

vi /etc/udev/scripts/iscsidev.sh
----------------------------------------
#!/bin/bash
BUS=${1}
HOST=${BUS%%:*}
[ -e /sys/class/iscsi_host ] || exit 1
file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})
if [ -z "${target_name}" ] ; then
       exit 1
fi
echo "${target_name##*:}"
----------------------------------------

chmod +x /etc/udev/scripts/iscsidev.sh

service iscsi start
iscsiadm -m discovery -t sendtargets -p 10.1.1.xx -l
service iscsi start
fdisk -l

Partition the shared iSCSI disk:
fdisk /dev/sdb
On all nodes: partprobe /dev/sdb

On all nodes, bind the shared iSCSI partitions to raw devices:
vi /etc/udev/rules.d/60-raw.rules
-------------------------------------
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdb5", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sdb6", RUN+="/bin/raw /dev/raw/raw4 %N"
KERNEL=="raw[1]", MODE="0660", GROUP="oinstall", OWNER="root"
KERNEL=="raw[2]", MODE="0660", GROUP="oinstall", OWNER="oracle"
KERNEL=="raw[3]", MODE="0660", GROUP="oinstall", OWNER="oracle"
KERNEL=="raw[4]", MODE="0660", GROUP="oinstall", OWNER="oracle"

Restart udev on all nodes:
start_udev

Check the raw devices on all nodes:
ll /dev/raw/
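The bindings can also be queried directly; each line should show the major/minor numbers of the matching sdb partition:
/bin/raw -qa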
-----------------------------------------------------------------------------------------
Cluster installation feasibility check (cluvfy):
cd /mnt
tar -zxvf clusterware10GR2_32.tar.gz
chown oracle.oinstall clusterware -R

su - oracle
cd /mnt/clusterware/cluvfy/

./runcluvfy.sh stage -pre crsinst -n xie,xie1 -verbose

This run reported errors (only the beginning of the output is shown; node reachability itself passed):
[oracle@xie1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n xie.uplooking.com,xie1.uplooking.com -verbose


Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "xie1"
  Destination Node                      Reachable?             
  ------------------------------------  ------------------------
  xie1                                  yes                    
  xie                                   yes                    
Result: Node reachability check passed from node "xie1"

Fix: on all nodes, edit /etc/hosts to:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
# public Network - (eth0)
10.1.1.135 xie1.uplooking.com xie1
10.1.1.132 xie.uplooking.com xie
# public virtual IP (eth0:#)
10.1.1.136 xie1-vip
10.1.1.133 xie-vip
# private Interconnect - (eth0:0)
10.1.2.135 xie1-priv
10.1.2.132 xie-priv

Then re-test the trust relationships:
xie.uplooking.com:
ssh xie
ssh xie1
ssh xie-priv
ssh xie1-priv

xie1.uplooking.com
ssh xie
ssh xie1
ssh xie-priv
ssh xie1-priv

The check now succeeds. (The following four packages cannot be installed on release 5; that is safe to ignore.)
Check: Package existence for "compat-gcc-7.3-2.96.128"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  xie                             missing                         failed         
  xie1                            missing                         failed         
Result: Package existence check failed for "compat-gcc-7.3-2.96.128".

Check: Package existence for "compat-gcc-c++-7.3-2.96.128"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  xie                             missing                         failed         
  xie1                            missing                         failed         
Result: Package existence check failed for "compat-gcc-c++-7.3-2.96.128".

Check: Package existence for "compat-libstdc++-7.3-2.96.128"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  xie                             missing                         failed         
  xie1                            missing                         failed         
Result: Package existence check failed for "compat-libstdc++-7.3-2.96.128".

Check: Package existence for "compat-libstdc++-devel-7.3-2.96.128"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  xie                             missing                         failed         
  xie1                            missing                         failed         
Result: Package existence check failed for "compat-libstdc++-devel-7.3-2.96.128".

---------------------------------------------------------------
Install the Clusterware software (run the installer on one node only, but manually add the other nodes to the cluster in the OUI):
/mnt/clusterware/runInstaller

Before running the /u01/crs_1/root.sh script, fix vipca & srvctl on all nodes:
su - oracle
cd $CRS_HOME/bin
vi +123 vipca      # opens at the block that exports LD_ASSUME_KERNEL
vi + srvctl        # the same block sits near the end of this file
In both files, add after that block:
unset LD_ASSUME_KERNEL
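The edit looks roughly like this in both files (the surrounding block is quoted from memory and may differ slightly between versions). The scripts force LD_ASSUME_KERNEL=2.4.19 for the bundled JRE, which breaks on RHEL 5 because its glibc no longer ships the LinuxThreads libraries that setting selects:

if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
     LD_ASSUME_KERNEL=2.4.19
     export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL     # <-- add this line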

/u01/crs_1/root.sh

If root.sh fails with:
Running vipca(silent) for configuring nodeapps
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]
Fix (run on one node only):
cd /u01/crs_1/bin
#./oifcfg iflist
#./oifcfg setif -global eth0/10.1.1.0:public       
#./oifcfg setif -global eth0:0/10.1.2.0:cluster_interconnect
#./oifcfg getif

Then run vipca manually as root (it needs an X display) to finish what root.sh could not.

Verify the status of the cluster daemons and resources:
cd /u01/crs_1/bin

[oracle@xie bin]$ ./crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora.xie.gsd    application    ONLINE    ONLINE    xie        
ora.xie.ons    application    ONLINE    ONLINE    xie        
ora.xie.vip    application    ONLINE    ONLINE    xie        
ora.xie1.gsd   application    ONLINE    ONLINE    xie1       
ora.xie1.ons   application    ONLINE    ONLINE    xie1       
ora.xie1.vip   application    ONLINE    ONLINE    xie1    

Back up the OCR as root:
cd /u01/crs_1/bin
./ocrconfig -export /home/oracle/bk/ocr/ocr1.bk
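CRS also keeps automatic OCR backups on its own schedule; to list them (same directory, as root):
./ocrconfig -showbackup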

------------------------------------------------------------------------------------------
Install the database software (run on one node only; the installer offers node selection). Choose software-only; do not create a database yet.
/mnt/database/runInstaller

Configure the cluster database network:
netca
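netca registers the node listeners as CRS resources, so once it finishes they should appear in the resource list:
cd /u01/crs_1/bin
./crs_stat -t | grep -i lsnr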
--------------------------------------------------------
Apply the patch set:
1. Stop the cluster on all nodes as root
[root@xie bin]# /etc/init.d/init.crs stop

[oracle@xie bin]$  ./crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.

2. Run ./runInstaller from the patch set:
  1. Patch the Clusterware home first: run ./runInstaller and select it (when it finishes, run the two scripts it prompts for).
After the patch, the cluster services are started again automatically:
[oracle@xie1 bin]$ ./crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora....IE.lsnr application    ONLINE    ONLINE    xie        
ora.xie.gsd    application    ONLINE    ONLINE    xie        
ora.xie.ons    application    ONLINE    ONLINE    xie        
ora.xie.vip    application    ONLINE    ONLINE    xie        
ora....E1.lsnr application    ONLINE    ONLINE    xie1       
ora.xie1.gsd   application    ONLINE    ONLINE    xie1       
ora.xie1.ons   application    ONLINE    ONLINE    xie1       
ora.xie1.vip   application    ONLINE    ONLINE    xie1  

   2. Then patch the database home the same way: run ./runInstaller and select it. Stop the cluster on all nodes first:
  [root@xie bin]# /etc/init.d/init.crs stop

[oracle@xie bin]$  ./crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.

This time the cluster services are not started automatically after the install; start them by hand: /etc/init.d/init.crs start
----------------------------------------------------
Create the database:
Build it with dbca (run on one node only).

After creation, query the cluster status:

[oracle@xie bin]$ ./crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora.racdb.db   application    ONLINE    ONLINE    xie        
ora....b1.inst application    ONLINE    ONLINE    xie        
ora....b2.inst application    ONLINE    ONLINE    xie1       
ora....SM1.asm application    ONLINE    ONLINE    xie        
ora....IE.lsnr application    ONLINE    ONLINE    xie        
ora.xie.gsd    application    ONLINE    ONLINE    xie        
ora.xie.ons    application    ONLINE    ONLINE    xie        
ora.xie.vip    application    ONLINE    ONLINE    xie        
ora....SM2.asm application    ONLINE    ONLINE    xie1       
ora....E1.lsnr application    ONLINE    ONLINE    xie1       
ora.xie1.gsd   application    ONLINE    ONLINE    xie1       
ora.xie1.ons   application    ONLINE    ONLINE    xie1       
ora.xie1.vip   application    ONLINE    ONLINE    xie1
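The same resources can be checked per component with srvctl (the database name racdb comes from the dbca step; adjust it to match yours):
srvctl status database -d racdb
srvctl status nodeapps -n xie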

RAC is a complete clustered application environment: it implements not only the cluster itself but also the application running on top of it, the Oracle database. Compared with an ordinary cluster, or with an ordinary Oracle database, RAC has some unique characteristics.

A RAC consists of at least two nodes connected by a public network and a private network. The private network carries inter-node communication; the public network serves user access. Each node runs one Oracle instance and one listener, each listening for user requests on an address called the VIP (Virtual IP). Users can send requests to whichever server a VIP lives on and reach the database through any instance. The Clusterware monitors the state of every node; if a node fails, it moves that node's instance, its VIP, and its other resources to another node, so users can still reach the database through that VIP.

In an ordinary Oracle setup, one instance opens one database, and a database can be opened by only one instance. In RAC, multiple instances, each running on a different node, open the same database concurrently, with the database files kept on shared storage.

RAC therefore provides both concurrent access and load balancing of user connections. A user can work through any instance; the instances communicate over the interconnect to keep transactions consistent. For example, when a user modifies data on one instance, the data is locked; a user modifying the same data on another instance must wait for the lock to be released. As soon as the first user commits, the second immediately sees the modified data.