Oracle 11gR2 RAC Installation

This installation guide follows http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVMwareServer2.php

# Check which of the required packages are already installed
rpm -q binutils \
       compat-libstdc++ \
       elfutils-libelf \
       elfutils-libelf-devel \
       gcc \
       gcc-c++ \
       glibc \
       glibc-common \
       glibc-devel \
       glibc-headers \
       ksh \
       libaio \
       libaio-devel \
       libgcc \
       libstdc++ \
       libstdc++-devel \
       make \
       sysstat \
       unixODBC \
       unixODBC-devel
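
A quick way to flag only the missing ones (a minimal sketch; the package list mirrors the check above):

for p in binutils compat-libstdc++ elfutils-libelf elfutils-libelf-devel \
         gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh \
         libaio libaio-devel libgcc libstdc++ libstdc++-devel make \
         sysstat unixODBC unixODBC-devel; do
  # rpm -q exits non-zero when the package is not installed
  rpm -q "$p" > /dev/null 2>&1 || echo "MISSING: $p"
done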
      
# Install any missing required packages from the installation media
      
cd /media/cdrom/Server
rpm -Uvh binutils-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh make-3.*
rpm -Uvh sysstat-7.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
 
cat >> /etc/hosts << EOF
127.0.0.1 localhost.localdomain localhost
# Public
192.168.1.101 rac1.localdomain rac1
192.168.1.102 rac2.localdomain rac2
# Private
192.168.0.101 rac1-priv.localdomain rac1-priv
192.168.0.102 rac2-priv.localdomain rac2-priv
# Virtual
192.168.1.111 rac1-vip.localdomain rac1-vip
192.168.1.112 rac2-vip.localdomain rac2-vip
# SCAN
192.168.1.201 rac-scan.localdomain rac-scan
EOF
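
Before going further, it is worth confirming that the public and private names resolve and answer (a minimal sketch; run it on both nodes - the VIP and SCAN addresses will only respond once Grid Infrastructure is up):

for h in rac1 rac2 rac1-priv rac2-priv; do
  # one ping per host; report names that do not answer
  ping -c 1 "$h" > /dev/null 2>&1 && echo "$h OK" || echo "$h UNREACHABLE"
done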
      
cat >> /etc/sysctl.conf << EOF      
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048586      
EOF
       
# Make the kernel parameters take effect
/sbin/sysctl -p
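
To confirm the values actually took effect, spot-check a few of them with the stock sysctl tool:

/sbin/sysctl kernel.shmmax kernel.sem fs.file-max net.ipv4.ip_local_port_range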
      
cat >> /etc/security/limits.conf << EOF      
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF
 
cat >> /etc/pam.d/login << EOF
session required pam_limits.so
EOF
 
# Confirm that /etc/selinux/config has SELINUX=disabled

# Disable ntpd (without NTP, Oracle's CTSS takes over cluster time synchronization)
service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.org
rm /var/run/ntpd.pid


# Create the oracle user and groups
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g oinstall -G dba oracle
passwd oracle



mkdir -p /oracle/grid
mkdir -p /oracle/oraInventory
mkdir -p /oracle/app/ora11g
chown -R oracle:oinstall /oracle
chmod -R 775 /oracle


# Append the following to the oracle user's environment
# (on rac2, use ORACLE_HOSTNAME=rac2.localdomain and ORACLE_SID=oradb2)
vi /home/oracle/.bash_profile
       

TMP=/tmp;                         export TMP
TMPDIR=$TMP;                      export TMPDIR
ORACLE_HOSTNAME=rac1.localdomain; export ORACLE_HOSTNAME
ORACLE_UNQNAME=oradb;             export ORACLE_UNQNAME
ORACLE_BASE=/oracle/app;          export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/ora11g;  export ORACLE_HOME
ORACLE_SID=oradb1;                export ORACLE_SID
ORACLE_TERM=xterm;                export ORACLE_TERM
PATH=/usr/sbin:$PATH;             export PATH
PATH=$ORACLE_HOME/bin:$PATH;      export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
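
To verify the limits are picked up by a fresh oracle login (run as root; the values should match the profile above):

su - oracle -c 'ulimit -u; ulimit -n'    # expect 16384 and 65536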
     
     
Find the kernel version and download the matching ASMLib RPMs:
uname -a
      
http://www.oracle.com/technology/software/tech/linux/asmlib/rhel5.html
oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm
oracleasm-support-2.1.3-1.el5.i386.rpm

# Install the Oracle ASMLib packages
rpm -ivh oracleasm*

# Configure ASMLib
/etc/init.d/oracleasm configure

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]
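
A quick check that the kernel module loaded and the service is healthy (standard oracleasm commands):

/etc/init.d/oracleasm status
lsmod | grep oracleasm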

 

# Add a new disk (sdb) and partition it, but do not format it
fdisk /dev/sdb

#--------------------------
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610):
Using default value 2610

Command (m for help): w
#---------------------------
# Create the ASM disk
/etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
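
The disk is stamped once, on one node only. On the other node (rac2), rescan instead of recreating it, then confirm both nodes see the disk:

/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks    # should print DISK1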


Other useful oracleasm commands:
-------------------------------------------------
# Reload the driver
/etc/init.d/oracleasm update-driver
# Check whether a disk is marked for ASM
/etc/init.d/oracleasm querydisk /dev/sda1
# List the ASM disks
/etc/init.d/oracleasm listdisks
# Delete a disk
/etc/init.d/oracleasm deletedisk DISK1
--------------------------------------------------


Install Grid Infrastructure
/mnt/cdrom/runInstaller
install and configure grid infrastructure for a cluster
next
typical installation
next
scan name : rac-scan
add node 2's hostname and virtual IP name
SSH connectivity setup (a manual key-exchange sketch follows below)
test
next
oracle base : /oracle/app
software location : /oracle/grid
select DISK1
next
# review the prerequisite checks
Finish - the installation starts

At around 65%, the installer automatically copies /oracle/grid over to rac2.
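
If the installer's automatic SSH setup fails, the user equivalence can be set up by hand (a minimal sketch, run as oracle; assumes stock OpenSSH on both nodes):

# on rac1 (repeat the mirror image on rac2)
ssh-keygen -t rsa                 # accept defaults, empty passphrase
ssh-copy-id oracle@rac2           # appends the key to rac2's authorized_keys
ssh rac2 date                     # must return without prompting
ssh rac2.localdomain date         # check the fully qualified name too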

 

Running /oracle/grid/root.sh at the end failed with an error:

Failure with signal 11 from command: /grid/bin/ocrconfig -local -upgrade oracle oinstall
Failed to create or upgrade OLR

CRS-2106:The OLR location /oracle/grid/cdata/rac1.olr is inaccessible. Details in /oracle/grid/log/rac1/client/ocrconfig_7927.log.

There are very few answers to this problem online. Some blame the firewall, others SELinux,
but I am certain both of those were disabled.

Others suggest hardware differences between the nodes, but my second VM is a copy of the first - they are completely identical.

Another suggested approach is to remove the node's registered resources and retry:

# Remove the node's registered resources
/oracle/grid/crs/install/roothas.pl -delete -force -verbose

chcon -t textrel_shlib_t /oracle/grid/lib/libclntsh.so.11.1

/oracle/grid/perl/bin/perl -I /oracle/grid/perl/lib -I /oracle/grid/crs/install /oracle/grid/crs/install/roothas.pl -delete -force
/oracle/grid/perl/bin/perl -I /oracle/grid/perl/lib -I /oracle/grid/crs/install /oracle/grid/crs/install/roothas.pl

make -f /oracle/grid/rdbms/lib/ins_rdbms.mk rac_on ioracle

That did not solve it either.

A note on oracle-base says the 32-bit build has this problem while the 64-bit build does not.

This had me stuck for three or four days; reinstalling the OS and recreating the ASM disks did not help.
So I tried the 64-bit build instead - same steps, and it really did go through smoothly.


[root@rac1 ~]# /oracle/oraInventory/orainstRoot.sh
Changing permissions of /oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /oracle/oraInventory to oinstall.
The execution of the script is complete.
[root@rac1 ~]# /oracle/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /oracle/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-06-07 19:51:10: Parsing the host name
2010-06-07 19:51:10: Checking for super user privileges
2010-06-07 19:51:10: User has super user privileges
Using configuration parameter file: /oracle/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on centos-release-5-5.el5.centos

 

CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded

ASM created and started successfully.

DiskGroup DATA created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 48db7efdcad74fa1bf826bfa35ebf1f6.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   48db7efdcad74fa1bf826bfa35ebf1f6 (ORCL:DISK1) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded

rac1     2010/06/07 19:57:32     /oracle/grid/cdata/rac1/backup_20100607_195732.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3760 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
'UpdateNodeList' was successful.

#---------------------------------------------------------
RAC2
While root.sh was still running on rac1, it was executed on rac2 at the same time:


[root@rac2 ~]# /oracle/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /oracle/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-06-07 19:47:06: Parsing the host name
2010-06-07 19:47:06: Checking for super user privileges
2010-06-07 19:47:06: User has super user privileges
Using configuration parameter file: /oracle/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on centos-release-5-5.el5.centos

 

CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded

Disk Group DATA already exists. Cannot be created again

Configuration of ASM failed, see logs for details
Did not succssfully configure and start ASM
CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /oracle/grid/bin/crsctl stop resource ora.crsd -init
Stop of resource "ora.crsd -init" failed
Failed to stop CRSD
CRS-2500: Cannot stop resource 'ora.asm' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /oracle/grid/bin/crsctl stop resource ora.asm -init
Stop of resource "ora.asm -init" failed
Failed to stop ASM
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac2'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
Initial cluster configuration failed.  See /oracle/grid/cfgtoollogs/crsconfig/rootcrs_rac2.log for details
[root@rac2 ~]# /oracle/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /oracle/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-06-07 19:50:31: Parsing the host name
2010-06-07 19:50:31: Checking for super user privileges
2010-06-07 19:50:31: User has super user privileges
Using configuration parameter file: /oracle/grid/crs/install/crsconfig_params
CRS is already configured on this node for crshome=0
Cannot configure two CRS instances on the same cluster.
Please deconfigure before proceeding with the configuration of new home.

#---------------------------------------------------
This output looked odd, so I decided to run root.sh again - but first, remove the node's registered resources:

[root@rac2 ~]# /oracle/grid/crs/install/roothas.pl -delete -force -verbose
2010-06-07 19:51:31: Checking for super user privileges
2010-06-07 19:51:31: User has super user privileges
2010-06-07 19:51:31: Parsing the host name
Using configuration parameter file: /oracle/grid/crs/install/crsconfig_params
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-4133: Oracle High Availability Services has been stopped.
ADVM/ACFS is not supported on centos-release-5-5.el5.centos

ACFS-9201: Not Supported
Successfully deconfigured Oracle Restart stack


#-----------------------------------------------------------
After root.sh had finished on rac1:
[root@rac2 ~]# /oracle/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /oracle/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-06-07 19:52:01: Parsing the host name
2010-06-07 19:52:01: Checking for super user privileges
2010-06-07 19:52:01: User has super user privileges
Using configuration parameter file: /oracle/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on centos-release-5-5.el5.centos

 

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded

rac2     2010/06/07 20:03:52     /oracle/grid/cdata/rac2/backup_20100607_200352.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4000 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
'UpdateNodeList' was successful.


Back in the installer:
configure oracle grid infrastructure for a cluster failed
oracle cluster verification utility failed.
Everything else was successful.
next
The installation of oracle grid infrastructure for a cluster was successful, but some configuration assistants failed, were cancelled or skipped.

close

# Continue with the database software installation
/mnt/cdrom/runInstaller
On the first screen, do not enter an email address; a warning pops up - skip it.
install database software only
select Real Application Clusters database installation
select rac1 and rac2 (rac1 is selected by default)
test ssh connectivity
next
selected language: english
next
next
oracle base : /oracle/app
software location : /oracle/app/ora11g
next
next
# review the prerequisite checks
clock synchronization failed
Opened a terminal and ran vmware-toolbox to sync the system time.
Still the same error.
date -s '21'


/oracle/grid/inventory/Templates/bin/cluvfy comp clocksync -verbose

PRVF-9661 : Time offset is NOT within the specified limits on the following nodes:"[rac2]"


If that still does not work, edit C:\Documents and Settings\All Users\Application Data\VMware\VMware Server\config.ini
on the host and add the following three lines:
host.cpukHz = "2800000"
host.noTSC = "TRUE"
ptsc.noTSC = "TRUE"

host.cpukHz must match your CPU's actual clock speed; the value above is for a 2.8 GHz CPU.

Still failing - every time-related setting VMware offers had been tried. Nothing worked, so I gave up on that route and fell back to NTP.

If you are using NTP, you must add the "-x" option into the following line in the "/etc/sysconfig/ntpd" file.

vi /etc/sysconfig/ntpd

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"  

# Restore the original configuration
mv /etc/ntp.conf.org /etc/ntp.conf

service ntpd start

# Verify
/oracle/grid/bin/crsctl check ctss
/oracle/grid/bin/cluvfy comp clocksync -n all
/oracle/grid/bin/cluvfy comp clocksync -verbose
Although the check reports a time-offset failure:

PRVF-5413 : Node "rac1" has a time offset of -41698.0 that is beyond permissible limit of 1000.0 from NTP Time Server "209.81.9.7"
  rac1          -41698.0                  failed
the overall result is still:
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was successful.

Good - on with the database software installation.

Installing... waiting...

[root@rac2 ~]# /oracle/app/ora11g/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /oracle/app/ora11g

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

The database software is installed. A quick test - works. Great.

[oracle@rac1 ~]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.1.0 Production on Tue Jun 8 20:54:04 2010

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

SQL> exit
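
Before creating a database, the clusterware stack itself can be checked (standard 11gR2 crsctl syntax, run from the grid home):

/oracle/grid/bin/crsctl check cluster -all    # CSS/CRS/EVM status on every node
/oracle/grid/bin/crsctl stat res -t           # tabular view of all registered resources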

Create the database
dbca
Choose the cluster (RAC) installation option. A silent-mode sketch follows below.
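
dbca can also create the RAC database non-interactively (a rough sketch only; flag names vary slightly between releases, so confirm with dbca -help - in particular the node-list flag - and the passwords below are placeholders):

dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName oradb -sid oradb \
  -sysPassword change_me -systemPassword change_me \
  -storageType ASM -diskGroupName DATA \
  -nodelist rac1,rac2    # on some 11g releases this flag is spelled -nodeinfo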

 

RAC Background Processes

1. GSD (Global Services Daemon)

GSD interacts with the RAC management tools - dbca, srvctl and OEM - to carry out administrative tasks such as starting and stopping instances. For these tools to work correctly, GSD must first be started on every node, and a single GSD process can serve multiple RAC instances on a node. The gsd executable lives in $ORACLE_HOME/bin, and its log file is $ORACLE_HOME/srvm/log/gsdaemon.log.

For example, when OEM is used to start an instance, OEM hands the task to the appropriate intelligent agent, which generates a script of SRVCTL commands; GSD reads and executes that script and returns the result to the agent, which in turn passes it back to OEM.

As another example, when srvctl is used to shut down all instances, GSD receives the request from the SRVCTL tool, executes it on the local node, and returns the result to the SRVCTL session. A few typical srvctl calls are sketched below.
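
For reference, a few of the srvctl calls involved (standard 11g syntax; oradb and oradb1 are this guide's database and instance names):

srvctl status database -d oradb            # state of all instances
srvctl stop instance -d oradb -i oradb1    # stop a single instance
srvctl start database -d oradb             # start every instance in the cluster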

2. LMON (Global Enqueue Service Monitor)

LMON monitors the global enqueues and global resources in the cluster, manages instance and process failures, and performs the corresponding recovery on the cluster enqueues.

3. LMD (Global Enqueue Service Daemon)

LMD manages access to global enqueues and resources, updates the status of the corresponding enqueues, and services resource requests coming from other instances. The current state of each global enqueue is stored in the shared memory of the relevant instance, and that state records whether the instance holds the right to use the resource. The master instance keeps a special queue in its shared memory that records resource requests arriving from remote instances. When a remote instance's LMD issues a resource request, the request is directed at the master instance's LMD; on receiving it, the master's LMD checks the special queue to see whether the resource is available. If it is, the LMD updates the enqueue's status and notifies the requesting LMD that the enqueue can be used. If the resource is held by another instance or is currently unavailable, the master's LMD tells the holding instance's LMD to release it; once the resource is released and becomes available, the master instance's LMD updates the enqueue's status and notifies the requesting instance's LMD that it may proceed. LMD is also responsible for detecting deadlocks on the enqueues.

4. LMSn (Global Cache Service Processes, n = 0-9)

The LMS processes manage access to data blocks across the cluster and ship block images between the buffer caches of the different instances. LMS services block requests cluster-wide and guarantees that the image of any given block appears in the buffer caches of all instances only once at a time. LMS coordinates block access by passing messages between instances: when an instance requests a block, its LMD issues a block-resource request directed at the LMD of the instance mastering that block; the master's LMD asks the holding instance's LMD to release the resource, at which point the holding instance's LMS builds a consistent-read image of the block and ships it to the buffer cache of the requesting instance.

LMS ensures that only one instance at a time is permitted to update a block, and it maintains the block image's state records (including the flag marking an updated block). RAC provides up to 10 LMS processes, and their number grows with the volume of message traffic between nodes.

5. LCK

LCK manages resource requests between instances and cross-instance call operations, including access to data dictionary objects; it handles the non-Cache-Fusion cache resource requests (for example, the dictionary cache).

6. DIAG (Diagnosability Daemon)

DIAG captures diagnostic information about failed processes in the instance and writes the corresponding trace files (saved under background_dump_dest/cdmp_timestamp). The process needs no configuration and must not be stopped; it starts automatically, requires no tuning, and restarts itself automatically if it fails.
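
Once an instance is up, these background processes can be listed from SQL*Plus (a small sketch using the v$bgprocess view):

sqlplus -s / as sysdba << 'EOF'
-- live background processes, including LMON, LMD0, LMSn, LCK0 and DIAG
select name, description from v$bgprocess where paddr <> '00' order by name;
EOF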
