Oracle 11g RAC Setup: Detailed Steps

Prerequisites

Database: 11.2.0.4

OS: CentOS 6.8

IP allocation:

#public ip

192.168.180.2 rac1

192.168.180.3 rac2

#private ip

10.10.10.2  rac1-priv

10.10.10.3  rac2-priv

#vip

192.168.180.4 rac1-vip

192.168.180.5 rac2-vip

#scan ip

192.168.180.6 rac-scan

RAC shared storage layout:

   OCR_VOTING  3 × 4 GB

   DATA        1 × 50 GB

   FRA_ARC     1 × 20 GB

1. Configure udev (all nodes)

For more on RAC shared storage, see: http://blog.csdn.net/shiyu1157758655/article/details/56837550

Generate the binding rules with the script below. Note: c, d, e, f, and g here are the disks to be used as shared storage; adjust them to your environment.

for i in c d e f g ;

do

echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules

done

Then activate the rules by running start_udev.
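Before reloading udev, it can help to sanity-check the rule text for one disk. The helper below is only illustrative (the WWID in the example call is a made-up placeholder); on a real node the RESULT value comes from scsi_id as in the loop above.

```shell
# Illustrative helper: build one udev rule line for an ASM disk, given the
# device letter and the WWID that scsi_id reports for it.
make_asm_rule() {
  letter="$1"; wwid="$2"
  printf 'KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="%s", NAME="asm-disk%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' \
    "$wwid" "$letter"
}

# On a real node the WWID would come from, e.g.:
#   /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sdc
make_asm_rule c 36000c29abcdef123   # placeholder WWID, not from a real array
```

Compare the printed line against /etc/udev/rules.d/99-oracle-asmdevices.rules; a missing space between options (a common copy-paste error in this rule) makes scsi_id fail silently and the asm-disk* devices never appear.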

2. Add groups and users (all nodes)

groupadd -g 1000 oinstall

groupadd -g 1200 asmadmin

groupadd -g 1201 asmdba

groupadd -g 1202 asmoper

groupadd -g 1300 dba

groupadd -g 1301 oper

useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba -d /home/grid -s /bin/bash grid

useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash oracle

-- Add the grid user to the dba group:

[root@rac1 app]# gpasswd -a grid dba

Adding user grid to group dba

-- Confirm the user information:

[root@rac1 ~]# id oracle

uid=502(oracle) gid=507(oinstall) groups=507(oinstall),502(dba),503(oper),506(asmdba)

[root@rac1 ~]# id grid

uid=1100(grid) gid=507(oinstall) groups=507(oinstall),504(asmadmin),506(asmdba),505(asmoper)

-- Set the passwords:

passwd oracle

passwd grid

3.  Disable the firewall and SELinux (all nodes)

Stop the firewall:

service iptables status

service iptables stop

chkconfig iptables off

chkconfig iptables --list

In /etc/selinux/config, set SELINUX to disabled.

[root@rac1 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

#     enforcing - SELinux security policy is enforced.

#     permissive - SELinux prints warnings instead of enforcing.

#     disabled - No SELinux policy is loaded.

SELINUX=disabled

# SELINUXTYPE= can take one of these two values:

#     targeted - Targeted processes are protected,

#     mls - Multi Level Security protection.

SELINUXTYPE=targeted
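The edit above only takes effect after a reboot (running setenforce 0 switches to permissive mode immediately in the meantime). A minimal sketch for confirming the on-disk setting; the file path is a parameter here so the check can be exercised against a scratch copy, but on a real node you would pass /etc/selinux/config:

```shell
# Minimal check that SELinux is configured as "disabled" in a config file.
selinux_disabled() {
  grep -q '^SELINUX=disabled$' "$1"
}

# On a node:
#   selinux_disabled /etc/selinux/config && echo "SELinux will be disabled after reboot"
```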

4.  Configure time synchronization (all nodes)

Here we use CTSS, so stop the NTP service, disable it from the init sequence, and remove the ntp.conf file. Run the following as root on both Oracle RAC nodes:

[root@rac1 ~]# /sbin/service ntpd stop
Shutting down ntpd: [ OK ]
[root@rac1 ~]# chkconfig ntpd off
[root@rac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.original
[root@rac1 ~]# chkconfig ntpd --list
ntpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off

Also remove the following file:

rm /var/run/ntpd.pid

This file holds the PID of the NTP daemon.
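CTSS only runs in active mode when no NTP configuration is detected, so it is worth confirming that both files are really gone on every node. A small sketch; the root-directory parameter exists only so the check can be rehearsed against a scratch tree, and on a real node you would call it with /:

```shell
# Succeeds when neither the NTP config nor the NTP pid file exists
# under the given root directory.
ntp_cleared() {
  root="${1:-/}"
  [ ! -e "$root/etc/ntp.conf" ] && [ ! -e "$root/var/run/ntpd.pid" ]
}

# On a real node (as root):
#   ntp_cleared / && echo "NTP fully removed; CTSS can run in active mode"
```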

5.  Create the directory structure (all nodes)

Inventory directory: /u01/app/oraInventory

ORACLE_BASE directory: /u01/app/oracle

Grid Infrastructure home: /u01/app/grid/product/11.2.0/grid_1

RDBMS home: /u01/app/oracle/product/11.2.0/db_1

Set the owner and permissions of each directory:

Inventory directory: owner grid:oinstall, mode 775

ORACLE_BASE directory: owner oracle:oinstall, mode 775

Grid Infrastructure home: owner grid:oinstall, mode 775

RDBMS home: owner oracle:oinstall, mode 775

mkdir -p /u01/app/oraInventory

mkdir -p /u01/app/oracle

mkdir -p /u01/app/grid/product/11.2.0/grid_1

mkdir -p /u01/app/oracle/product/11.2.0/db_1

[root@rac1 ~]# chown -R grid:oinstall /u01/app/oraInventory/

[root@rac1 ~]# chown -R oracle:oinstall /u01/app/oracle/

[root@rac1 ~]# chown -R grid:oinstall /u01/app/grid/product/11.2.0/grid_1/

[root@rac1 ~]# chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1/
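The four mkdir/chown/chmod steps above can also be driven from a single list, which makes it harder to miss a directory on the second node. This is only a sketch of the same layout: the prefix is a parameter so the loop can be rehearsed outside /u01, and the chown calls are skipped when not running as root.

```shell
# Create the inventory, base, grid home and db home under a prefix, with
# mode 775 and (when run as root) the owners from the table above.
make_oracle_dirs() {
  prefix="${1:-/u01}"
  for spec in \
    "grid:oinstall:$prefix/app/oraInventory" \
    "oracle:oinstall:$prefix/app/oracle" \
    "grid:oinstall:$prefix/app/grid/product/11.2.0/grid_1" \
    "oracle:oinstall:$prefix/app/oracle/product/11.2.0/db_1"
  do
    owner="${spec%%:*}"; rest="${spec#*:}"
    group="${rest%%:*}"; dir="${rest#*:}"
    mkdir -p "$dir" && chmod 775 "$dir"
    if [ "$(id -u)" = 0 ]; then
      chown -R "$owner:$group" "$dir"
    fi
  done
}

# On a real node (as root): make_oracle_dirs /u01
```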

6.  Configure host name aliases in /etc/hosts (all nodes)

If you do not plan to use a DNS server, map the host names to IP addresses locally on each server. The main file involved is:

/etc/hosts

The recommended entries for this file are as follows:

    127.0.0.1   localhost   # be sure to include this line

#public ip

192.168.180.2  rac1

192.168.180.3  rac2

#private ip

10.10.10.2     rac1-priv

10.10.10.3     rac2-priv

#vip

192.168.180.4  rac1-vip

192.168.180.5  rac2-vip

#scan ip

192.168.180.6  rac-scan

Note: apart from the entry for the local host name, the /etc/hosts files on the two nodes must be identical. In the entries above, the public, virtual, and SCAN IPs serve client traffic; the private IP carries the interconnect heartbeat.
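A single missing or misspelled entry here tends to surface much later as a cluvfy or VIP failure, so a quick scripted audit of the file helps. A rough sketch, using the node and SCAN names from this guide; the file is a parameter so the function can be tried against a copy:

```shell
# Verify that /etc/hosts-style content contains the localhost line, a SCAN
# entry, and public, private (-priv) and virtual (-vip) entries per node.
hosts_ok() {
  f="$1"
  grep -q '^127\.0\.0\.1[[:space:]]' "$f" || return 1
  grep -Eq '[[:space:]]rac-scan([[:space:]]|$)' "$f" || return 1
  for node in rac1 rac2; do
    grep -Eq "[[:space:]]$node([[:space:]]|\$)" "$f" || return 1
    grep -Eq "[[:space:]]$node-priv([[:space:]]|\$)" "$f" || return 1
    grep -Eq "[[:space:]]$node-vip([[:space:]]|\$)" "$f" || return 1
  done
}

# On a node: hosts_ok /etc/hosts && echo "hosts file looks complete"
```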

7.  Configure node-to-node trust (all nodes)

Set up passwordless trust between the cluster nodes for both the grid and oracle users; SSH is the recommended protocol. The main file involved is:

$HOME/.ssh/authorized_keys

Note: this file must be created manually.

For details, see: http://blog.csdn.net/shiyu1157758655/article/details/56838603
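One common way to do this by hand is sketched below for a single user (repeat as both grid and oracle on every node). Only the local key generation is executable as-is; distributing the public keys requires each remote password once, so that part is shown as comments.

```shell
# Generate a passphrase-less RSA key for the current user if none exists.
ensure_rsa_key() {
  keyfile="${1:-$HOME/.ssh/id_rsa}"
  keydir=$(dirname "$keyfile")
  mkdir -p "$keydir" && chmod 700 "$keydir"
  [ -f "$keyfile" ] || ssh-keygen -q -t rsa -N "" -f "$keyfile"
}

# Then append every node's public key to $HOME/.ssh/authorized_keys on
# every node, e.g. (prompts for each remote password once):
#   for h in rac1 rac2; do ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "$h"; done
# Finally verify passwordless access in both directions, including the
# private names, so the installer's user-equivalence check passes:
#   ssh rac1 date; ssh rac2 date; ssh rac1-priv date; ssh rac2-priv date
```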

8.  Configure environment variables (all nodes)

Set up environment variables for both the grid and oracle users.

Suggested settings for the grid user:

umask 022

PS1='$ORACLE_SID'":"'$PWD'"@"`hostname`">"

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

ORACLE_HOME=/u01/app/grid/product/11.2.0/grid_1; export ORACLE_HOME

ORACLE_SID=+ASM1; export ORACLE_SID

ORACLE_TERM=xterm; export ORACLE_TERM

TMPDIR=/var/tmp; export TMPDIR

NLS_DATE_FORMAT="YYYY/MM/DD hh24:mi:ss"; export NLS_DATE_FORMAT

ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33

TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN DISPLAY

LIBPATH=$ORACLE_HOME/lib; export LIBPATH

PATH=$PATH:$ORACLE_HOME/bin:/usr/sbin; export PATH

Suggested settings for the oracle user:

umask 022

PS1='$ORACLE_SID'":"'$PWD'"@"`hostname`">"

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME

ORACLE_SID=rac1; export ORACLE_SID

ORACLE_TERM=xterm; export ORACLE_TERM

TMPDIR=/var/tmp; export TMPDIR

NLS_DATE_FORMAT="YYYY/MM/DD hh24:mi:ss"; export NLS_DATE_FORMAT

ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33

TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN DISPLAY

LIBPATH=$ORACLE_HOME/lib; export LIBPATH

PATH=$PATH:$ORACLE_HOME/bin:/usr/sbin; export PATH

9.  Edit /etc/security/limits.conf (all nodes)

As root on each Oracle RAC node, add the following lines to /etc/security/limits.conf, or run the command below:

[root@rac1 ~]# cat >> /etc/security/limits.conf <<EOF
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF
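A silently missing line here only shows up later as an installer warning, so the appended block is worth auditing mechanically on each node. A small sketch; the file is a parameter so it can be run against a copy, and on a real node you would pass /etc/security/limits.conf:

```shell
# Check that both users have all four limit lines in a limits.conf-style file.
limits_ok() {
  f="$1"
  for u in grid oracle; do
    for t in "soft nproc" "hard nproc" "soft nofile" "hard nofile"; do
      kind=${t% *}; item=${t#* }
      grep -Eq "^$u[[:space:]]+$kind[[:space:]]+$item[[:space:]]+[0-9]+" "$f" || return 1
    done
  done
}

# On a node: limits_ok /etc/security/limits.conf || echo "limits incomplete"
```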

10. Edit /etc/pam.d/login (all nodes)

On each Oracle RAC node, add or edit the following line in /etc/pam.d/login:

[root@rac1 ~]# cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF

11. Configure the system default profile (all nodes)

Make sure the following is loaded by the system's default profile (typically /etc/profile):

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then

if [ $SHELL = "/bin/ksh" ]; then

  ulimit -p 16384

  ulimit -n 65536

else

  ulimit -u 16384 -n 65536

fi

 umask 022

fi

12. Edit /etc/sysctl.conf (all nodes)

#vi /etc/sysctl.conf

kernel.shmmax = 4294967295

kernel.shmall = 2097152

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

fs.file-max = 6815744

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default=262144

net.core.rmem_max=4194304

net.core.wmem_default=262144 

net.core.wmem_max=1048576

fs.aio-max-nr=1048576

kernel.panic_on_oops=1

Apply the changed parameters:

[root@rac1 ~]# sysctl -p
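One parameter worth double-checking is kernel.shmmax: the value above (4294967295, about 4 GB) can cap the size of a single shared memory segment and therefore the SGA. A common rule of thumb (an assumption here, not from this guide; in practice size it to your planned SGA) is roughly half of physical RAM on a dedicated 64-bit database host:

```shell
# Rule-of-thumb helper: given MemTotal in kB (as reported in /proc/meminfo),
# print half of physical RAM in bytes as a candidate kernel.shmmax value.
shmmax_for_mem_kb() {
  echo $(( $1 * 1024 / 2 ))
}

# On a real node:
#   mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
#   echo "kernel.shmmax = $(shmmax_for_mem_kb "$mem_kb")"
```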

13. Pre-installation check

+ASM1:/src/oracle/grid@rac1> ./runcluvfy.sh stage -pre crsinst -n rac1,rac2

Performing pre-checks for cluster services setup

Checking node reachability...

Node reachability check passed from node "rac1"

 

Checking user equivalence...

User equivalence check passed for user "grid"

 

Checking node connectivity...

 

Checking hosts config file...

 

Verification of the hosts config file successful

 

Node connectivity passed for subnet "192.168.180.0" with node(s) rac2,rac1

TCP connectivity check passed for subnet "192.168.180.0"

 

Node connectivity passed for subnet "10.10.10.0" with node(s) rac2,rac1

TCP connectivity check passed for subnet "10.10.10.0"

 

 

Interfaces found on subnet "192.168.180.0" that are likely candidates for VIP are:

rac2 eth0:192.168.180.3

rac1 eth0:192.168.180.2

 

Interfaces found on subnet "10.10.10.0" that are likely candidates for a private interconnect are:

rac2 eth1:10.10.10.3

rac1 eth1:10.10.10.2

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "192.168.180.0".

Subnet mask consistency check passed for subnet "10.10.10.0".

Subnet mask consistency check passed.

 

Node connectivity check passed

 

Checking multicast communication...

 

Checking subnet "192.168.180.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "192.168.180.0" for multicast communication with multicast group "230.0.1.0" passed.

 

Checking subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0" passed.

 

Check of multicast communication passed.

 

Checking ASMLib configuration.

Check for ASMLib configuration passed.

Total memory check passed

Available memory check passed

Swap space check passed

Free disk space check passed for "rac2:/var/tmp"

Free disk space check passed for "rac1:/var/tmp"

Check for multiple users with UID value 1100 passed

User existence check passed for "grid"

Group existence check passed for "oinstall"

Group existence check passed for "dba"

Membership check for user "grid" in group "oinstall" [as Primary] passed

Membership check for user "grid" in group "dba" passed

Run level check passed

Hard limits check passed for "maximum open file descriptors"

Soft limits check passed for "maximum open file descriptors"

Hard limits check passed for "maximum user processes"

Soft limits check passed for "maximum user processes"

System architecture check passed

Kernel version check passed

Kernel parameter check passed for "semmsl"

Kernel parameter check passed for "semmns"

Kernel parameter check passed for "semopm"

Kernel parameter check passed for "semmni"

Kernel parameter check passed for "shmmax"

Kernel parameter check passed for "shmmni"

Kernel parameter check passed for "shmall"

Kernel parameter check passed for "file-max"

Kernel parameter check passed for "ip_local_port_range"

Kernel parameter check passed for "rmem_default"

Kernel parameter check passed for "rmem_max"

Kernel parameter check passed for "wmem_default"

Kernel parameter check passed for "wmem_max"

Kernel parameter check passed for "aio-max-nr"

Package existence check passed for "make"

Package existence check passed for "binutils"

Package existence check passed for "gcc(x86_64)"

Package existence check passed for "libaio(x86_64)"

Package existence check passed for "glibc(x86_64)"

Package existence check passed for "compat-libstdc++-33(x86_64)"

Package existence check passed for "elfutils-libelf(x86_64)"

Package existence check failed for "elfutils-libelf-devel"

Check failed on nodes:

rac2,rac1

Package existence check passed for "glibc-common"

Package existence check passed for "glibc-devel(x86_64)"

Package existence check passed for "glibc-headers"

Package existence check passed for "gcc-c++(x86_64)"

Package existence check passed for "libaio-devel(x86_64)"

Package existence check passed for "libgcc(x86_64)"

Package existence check passed for "libstdc++(x86_64)"

Package existence check passed for "libstdc++-devel(x86_64)"

Package existence check passed for "sysstat"

Package existence check passed for "pdksh"

Package existence check passed for "expat(x86_64)"

Check for multiple users with UID value 0 passed

Current group ID check passed

 

Starting check for consistency of primary group of root user

 

Check for consistency of root user's primary group passed

 

Starting Clock synchronization checks using Network Time Protocol(NTP)...

 

NTP Configuration file check started...

No NTP Daemons or Services were found to be running

 

Clock synchronization check using Network Time Protocol(NTP) passed

 

Core file name pattern consistency check passed.

 

User "grid" is not part of "root" group. Check passed

Default user file creation mask check passed

Checking consistency of file "/etc/resolv.conf" across nodes

 

File "/etc/resolv.conf" does not have both domain and search entries defined

domain entry in file "/etc/resolv.conf" is consistent across nodes

search entry in file "/etc/resolv.conf" is consistent across nodes

PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac2,rac1

 

File "/etc/resolv.conf" is not consistent across nodes

 

Time zone consistency check passed

 

Pre-check for cluster services setup was unsuccessful on all the nodes.

Work through and resolve the failures reported above. If DNS is not configured, the following message can be ignored:

PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac2,rac1

14. Install Grid Infrastructure

On node 1, run runInstaller as the grid user.

 

[root@rac1 app]# chown -R grid:oinstall oracle   # run this on both nodes

 

 

 

 

The two warnings shown above (in the installer screenshots, not reproduced here) can be ignored.

 

 

 

 

 

Run on node 1:

 

[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

 

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

 

Run on node 2:

[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

 

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

 

Run on node 1:

[root@rac1 ~]# /u01/app/grid/product/11.2.0/grid_1/root.sh

Performing root user operation for Oracle 11g

 

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/grid/product/11.2.0/grid_1

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

   Copying dbhome to /usr/local/bin ...

   Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...

 

 

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/grid/product/11.2.0/grid_1/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

Failed to create keys in the OLR, rc = 127, Message:

 shared object file: No such file or directory

 

Failed to create keys in the OLR at /u01/app/grid/product/11.2.0/grid_1/crs/install/crsconfig_lib.pm line 7660.

/u01/app/grid/product/11.2.0/grid_1/crs/install/rootcrs.pl execution failed

 

This error is caused by a missing package; installing it on both nodes resolves it. For details, see: http://blog.csdn.net/shiyu1157758655/article/details/59486625

 

[root@rac1 ~]# rpm -ivh /os/Packages/compat-libcap1-1.10-1.x86_64.rpm

warning: /os/Packages/compat-libcap1-1.10-1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID c105b9de: NOKEY

Preparing...               ########################################### [100%]

   1:compat-libcap1        ########################################### [100%]

Run root.sh again:

[root@rac1 ~]# /u01/app/grid/product/11.2.0/grid_1/root.sh

Performing root user operation for Oracle 11g

 

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/grid/product/11.2.0/grid_1

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

 

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/grid/product/11.2.0/grid_1/crs/install/crsconfig_params

User ignored Prerequisites during installation

Installing Trace File Analyzer

OLR initialization - successful

  root wallet

  root wallet cert

  root cert export

  peer wallet

  profile reader wallet

  pa wallet

  peer wallet keys

  pa wallet keys

  peer cert request

  pa cert request

  peer cert

  pa cert

  peer root cert TP

  profile reader root cert TP

  pa root cert TP

  peer pa cert TP

  pa peer cert TP

  profile reader pa cert TP

  profile reader peer cert TP

  peer user cert

  pa user cert

Adding Clusterware entries to upstart

CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'

CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'

CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'

CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'

CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded

CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rac1'

CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'

CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

 

ASM created and started successfully.

 

Disk Group OCR_VOTING created successfully.

 

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

CRS-4256: Updating the profile

Successful addition of voting disk 5ddc4ea587f04f18bfb066d1d3ff07d9.

Successful addition of voting disk 64611da725794f0ebf206204283eff9a.

Successful addition of voting disk 46a2f2c1a5a14f4fbf58e6505f889674.

Successfully replaced voting disk group with +OCR_VOTING.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

##  STATE   File Universal Id                File Name        Disk group

--  -----   -----------------                ---------        ----------

 1. ONLINE  5ddc4ea587f04f18bfb066d1d3ff07d9 (/dev/asm-diskc) [OCR_VOTING]

 2. ONLINE  64611da725794f0ebf206204283eff9a (/dev/asm-diskd) [OCR_VOTING]

 3. ONLINE  46a2f2c1a5a14f4fbf58e6505f889674 (/dev/asm-diske) [OCR_VOTING]

Located 3 voting disk(s).

CRS-2672: Attempting to start 'ora.asm' on 'rac1'

CRS-2676: Start of 'ora.asm' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.OCR_VOTING.dg' on 'rac1'

CRS-2676: Start of 'ora.OCR_VOTING.dg' on 'rac1' succeeded

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run on node 2:

[root@rac2 ~]# /u01/app/grid/product/11.2.0/grid_1/root.sh

Performing root user operation for Oracle 11g

 

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/grid/product/11.2.0/grid_1

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

   Copying dbhome to /usr/local/bin ...

   Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...

 

 

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/grid/product/11.2.0/grid_1/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

OLR initialization - successful

Adding Clusterware entries to upstart

 terminating

An active cluster was found during exclusive startup, restarting to join the cluster

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

 

Click OK to continue the installation.

One error is reported here; I chose to ignore it, and it appeared to have no impact.

At this point, Grid Infrastructure is installed.

 

15. Create the ASM disk groups

Log in as the grid user and run asmca.

Click [Create].

 

16. Install the Oracle RDBMS software

Once the disk groups are created, the RDBMS software can be installed.

Note: run the RDBMS installation as the oracle user.

./runInstaller

 

Here we install the software only.

Ignore the two errors shown above.

Then click Install.

 

17. Create the database

Run dbca as the oracle user.

Click to execute, and the database will be created.

18. Summary

The Oracle 11g RAC installation is now complete. Various problems come up along the way; they need to be worked through patiently, one by one.

