Rac11gR2OnLinux

  1. Introduction

Note: This RAC Guide is not meant to replace or supplant the Oracle Documentation set, but rather, it is meant as a supplement to the same. It is imperative that the Oracle Documentation be read, understood, and referenced to provide answers to any questions that may not be clearly addressed by this Guide.

    1.1. Overview of new concepts in 11gR2 Grid Infrastructure

      1.1.1. SCAN

The single client access name (SCAN) is the address used by all clients connecting to the cluster. The SCAN name is a domain name registered to three IP addresses, either in the domain name service (DNS) or the Grid Naming Service (GNS). The SCAN name eliminates the need to change clients when nodes are added to or removed from the cluster. Clients using SCAN names can also access the cluster using EZCONNECT.

        • The Single Client Access Name (SCAN) is a domain name that resolves to all the addresses allocated for the SCAN name. Three IP addresses should be provided (in DNS) to use for SCAN name mapping, as this ensures high availability. During Oracle Grid Infrastructure installation, listeners are created for each of the SCAN addresses, and Oracle Grid Infrastructure controls which server responds to a SCAN address request.

        • The SCAN addresses need to be on the same subnet as the VIP addresses for nodes in the cluster.
        • The SCAN domain name must be unique within your corporate network.
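For illustration, assuming the SCAN name used later in this guide (docrac-scan.example.com, default listener port 1521) and a hypothetical database service named orcl, a client could connect through the SCAN with EZCONNECT as follows:

$ sqlplus system@//docrac-scan.example.com:1521/orcl

Because the SCAN resolves to three addresses, the client is transparently directed to one of the SCAN listeners, regardless of which nodes currently make up the cluster.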

      1.1.2. GNS

In the past, the host and VIP names and addresses were defined in the DNS or locally in a hosts file. GNS can simplify this setup by using DHCP. To use GNS, DHCP must be configured in the subdomain in which the cluster resides.

      1.1.3. OCR and Voting on ASM storage

The ability to use ASM (Automatic Storage Management) diskgroups for Clusterware OCR and Voting disks is a new feature in the Oracle Database 11g Release 2 Grid Infrastructure. If you choose this option and ASM is not yet configured, OUI launches ASM configuration assistant to configure ASM and a diskgroup.
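Once the Grid Infrastructure installation is complete, the OCR and voting disk locations in ASM can be confirmed with the standard Clusterware utilities; a quick check, assuming the Grid home used later in this guide, is:

# /u01/11.2.0/grid/bin/ocrcheck
# /u01/11.2.0/grid/bin/crsctl query css votedisk

ocrcheck reports the OCR location (the ASM diskgroup) and its integrity, while crsctl lists the voting disks and the diskgroup they reside in.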

      1.1.4. Passwordless Automatic SSH Connectivity

If SSH has not been configured prior to the installation, you can have the installer configure passwordless SSH connectivity for you. The configuration can be tested from within the installer as well.
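If you prefer to set up the user equivalence manually before starting the OUI, a minimal sketch (run as the grid user, assuming the node names racnode1 and racnode2 used later in this guide) is:

$ ssh-keygen -t rsa                 (accept the defaults, empty passphrase)
$ ssh-copy-id grid@racnode1
$ ssh-copy-id grid@racnode2
$ ssh racnode2 date                 (should return the date without prompting for a password)

Repeat for every node and for the oracle user. The installer's built-in SSH setup achieves the same result and is usually the simpler option.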

      1.1.5. Intelligent Platform Management Interface (IPMI)

Intelligent Platform Management Interface (IPMI) provides a set of common interfaces to computer hardware and firmware that administrators can use to monitor system health and manage the system.

With Oracle Database 11g Release 2, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You must have the following hardware and software configured to enable cluster nodes to be managed with IPMI:

        • Each cluster member node requires a Baseboard Management Controller (BMC) running firmware compatible with IPMI version 1.5, which supports IPMI over LANs, and is configured for remote control.
        • Each cluster member node requires an IPMI driver to be installed.
        • The cluster requires a management network for IPMI. This can be a shared network, but Oracle recommends that you configure a dedicated network.
        • Each cluster node's Ethernet port used by the BMC must be connected to the IPMI management network.

If you intend to use IPMI, then you must provide an administration account username and password when prompted during installation.
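As an illustration only (the BMC address and credentials below are placeholders), once the IPMI driver is loaded the BMC LAN configuration of a node can be inspected locally and then tested remotely with ipmitool:

# ipmitool lan print 1
# ipmitool -I lanplus -H 192.0.2.50 -U ipmiadmin -P welcome1 chassis status

The second command, run from another host on the IPMI management network, confirms that the BMC is reachable over the LAN.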

      1.1.6. Time Sync

Oracle Clusterware 11g release 2 (11.2) requires time synchronization across all nodes within a cluster when Oracle RAC is deployed. To achieve this, you should have the network time protocol (NTP) configured at the operating system level. The new Oracle Cluster Time Synchronization Service is designed for organizations whose Oracle RAC databases are unable to access NTP services.

      1.1.7. Clusterware and ASM share the same Oracle Home

The Clusterware and ASM now share the same Oracle home, which is therefore known as the Grid Infrastructure home (prior to 11gR2, ASM and the RDBMS could be installed either in the same Oracle home or in separate Oracle homes).

      1.1.8. Hangcheck-timer and oprocd are replaced

Oracle Clusterware 11g release 2 (11.2) replaces the oprocd and hangcheck-timer processes with the Cluster Synchronization Services daemon (CSSD) Agent and Monitor to provide more accurate recognition of hangs and to avoid false termination.

      1.1.9. Rebootless Restart

The fencing mechanism has changed in 11gR2. Oracle Clusterware aims to achieve a node eviction without rebooting a node. CSSD starts a graceful shutdown mechanism after seeing a failure. Thereafter, OHASD will try to restart the stack. It is only if the cleanup (of a failed subcomponent) fails that the node is rebooted in order to perform a forced cleanup.

      1.1.10. HAIP

In 11.2.0.2 the new HAIP (Redundant Interconnect) facility is active, and multiple interface selection supports load balancing and failover. You can select up to four interfaces for the private interconnect at install time, or add them dynamically afterwards using oifcfg.
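For example, the interfaces registered for the cluster interconnect can be listed, and an additional private interface added after installation, with oifcfg (the interface name and subnet below are hypothetical):

# /u01/11.2.0/grid/bin/oifcfg getif
# /u01/11.2.0/grid/bin/oifcfg setif -global eth2/192.168.2.0:cluster_interconnect

With more than one interface registered as cluster_interconnect, HAIP distributes interconnect traffic across them and fails it over if an interface goes down.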

    1.2. System Requirements

      1.2.1. Hardware Requirements

    • Physical memory (at least 1.5 gigabytes (GB) of RAM)
    • Swap space equal to the amount of RAM for systems with up to 32 GB of RAM; for systems with more than 32 GB of RAM, the requirement is 32 GB of swap (commands to verify these figures are sketched after this list)


    • Temporary space (at least 1 GB) available in /tmp
    • A processor type (CPU) that is certified with the version of the Oracle software being installed
    • A minimum display resolution of 1024 x 768, so that Oracle Universal Installer (OUI) displays correctly
    • All servers that will be used in the cluster must have the same chip architecture, for example, all 32-bit processors or all 64-bit processors
    • Disk space for software installation locations: You will need at least 4.5 GB of available disk space for the Grid Infrastructure home directory, which includes both the binary files for Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM) and their associated log files, and at least 4 GB of available disk space for the Oracle Database home directory.

    • Shared disk space: An Oracle RAC database is a shared everything database. All data files, control files, redo log files, and the server parameter file (SPFILE) used by the Oracle RAC database must reside on shared storage that is accessible by all Oracle RAC database instances. The Oracle RAC installation that is described in this guide uses Oracle ASM for the shared storage for Oracle Clusterware and Oracle Database files. The amount of shared disk space is determined by the size of your database.
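A quick way to verify the physical memory, swap, and temporary space figures listed above on each node is:

# grep MemTotal /proc/meminfo
# grep SwapTotal /proc/meminfo
# df -h /tmp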

      1.2.2. Network Hardware Requirements

    • Each node must have at least two network interface cards (NIC), or network adapters.
    • Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter eth0, then you must configure eth0 as the public interface on all nodes.
    • Private interface names should be the same for all nodes as well. If eth1 is the private interface name for the first node, then eth1 should be the private interface name for your second node.
    • The network adapter for the public interface must support TCP/IP.
    • The network adapter for the private interface must support the user datagram protocol (UDP) using high-speed network adapters and a network switch that supports TCP/IP (Gigabit Ethernet or better).
    • For the private network, the end points of all designated interconnect interfaces must be completely reachable on the network. Every node in the cluster should be able to connect to every private network interface in the cluster.
    • The host name of each node must conform to the RFC 952 standard, which permits alphanumeric characters. Host names using underscores ("_") are not allowed.

      1.2.3. IP Address Requirements

    • One public IP address for each node
    • One virtual IP address for each node
    • Three single client access name (SCAN) addresses for the cluster

      1.2.4. Installation method

This document details the steps for installing a 2-node Oracle 11gR2 RAC cluster on Linux:

    • The Oracle Grid home binaries are installed on the local disk of each of the RAC nodes.
    • The files required by Oracle Clusterware (OCR and Voting disks) are stored in ASM.
    • The installation is explained without GNS and IPMI (additional information for installing with GNS and IPMI is provided where applicable).


  2. Prepare the cluster nodes for Oracle RAC


    2.1. User Accounts

NOTE: We recommend using different users for the installation of the Grid Infrastructure (GI) and the Oracle RDBMS home. The GI will be installed in a separate Oracle base, owned by user 'grid'. After the grid install, the GI home will be owned by root and inaccessible to unauthorized users.

  1. Create OS groups using the commands below. Enter these commands as the 'root' user:

# /usr/sbin/groupadd -g 501 oinstall
# /usr/sbin/groupadd -g 502 dba
# /usr/sbin/groupadd -g 504 asmadmin
# /usr/sbin/groupadd -g 506 asmdba
# /usr/sbin/groupadd -g 507 asmoper

  2. Create the users that will own the Oracle software using the commands:

# /usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
# /usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle
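To confirm that the users and group memberships were created as intended, the id command can be used; given the uids and gids above, the output should look similar to:

# id grid
uid=501(grid) gid=501(oinstall) groups=501(oinstall),504(asmadmin),506(asmdba),507(asmoper)
# id oracle
uid=502(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),506(asmdba)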

  3. Set the passwords for the oracle and grid accounts using the following commands. Replace password with your own password.

# passwd oracle
Changing password for user oracle.
New UNIX password: password
Retype new UNIX password: password
passwd: all authentication tokens updated successfully.

# passwd grid
Changing password for user grid.
New UNIX password: password
Retype new UNIX password: password
passwd: all authentication tokens updated successfully.

  4. Repeat Step 1 through Step 3 on each node in your cluster.

    2.2. Networking

NOTE: This section is intended to be used for installations NOT using GNS.

  1. Determine your cluster name. The cluster name should satisfy the following conditions:


    • The cluster name is globally unique throughout your host domain.
    • The cluster name is at least 1 character long and less than 15 characters long.
    • The cluster name must consist of the same character set used for host names: single-byte alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-).

  2. Determine the public host name for each node in the cluster. For the public host name, use the primary host name of each node. In other words, use the name displayed by the hostname command, for example: racnode1.

    • It is recommended that redundant NICs are configured with the Linux bonding driver. Active/passive is the preferred bonding method due to its simpler configuration.

  3. Determine the public virtual host name for each node in the cluster. The virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle recommends that you provide a name in the format <public hostname>-vip, for example: racnode1-vip. The virtual host name must meet the following requirements:

    • The virtual IP address and the network name must not be currently in use.
    • The virtual IP address must be on the same subnet as your public IP address.
    • The virtual host name for each node should be registered with your DNS.

  4. Determine the private hostname for each node in the cluster. This private hostname does not need to be resolvable through DNS and should be entered in the /etc/hosts file. A common naming convention for the private hostname is <public hostname>-pvt.

    • The private IP should NOT be accessible to servers not participating in the local cluster.
    • The private network should be on standalone dedicated switch(es).
    • The private network should NOT be part of a larger overall network topology.
    • The private network should be deployed on Gigabit Ethernet or better.
    • It is recommended that redundant NICs are configured with the Linux bonding driver. Active/passive is the preferred bonding method due to its simpler configuration.

  5. Define a SCAN DNS name for the cluster that resolves to three IP addresses (round-robin). The SCAN IPs must NOT be in the /etc/hosts file; the SCAN name must be resolved by DNS.
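The round-robin resolution can be checked from any node with nslookup; using the example SCAN name from this guide, repeated lookups should return the three addresses in rotating order:

$ nslookup docrac-scan.example.com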

  6. Even if you are using DNS, Oracle recommends that you add lines to the /etc/hosts file on each node, specifying the public IP, VIP, and private addresses. Configure the /etc/hosts file so that it is similar to the following example:

NOTE: The SCAN IPs MUST NOT be in the /etc/hosts file; resolving the SCAN through /etc/hosts would result in only one SCAN IP for the entire cluster.

#eth0 - PUBLIC
192.0.2.100    racnode1.example.com        racnode1
192.0.2.101    racnode2.example.com        racnode2
#VIP
192.0.2.102    racnode1-vip.example.com    racnode1-vip
192.0.2.103    racnode2-vip.example.com    racnode2-vip
#eth1 - PRIVATE
172.0.2.100    racnode1-pvt
172.0.2.101    racnode2-pvt

  7. If you configured the IP addresses in a DNS server, then, as the root user, change the hosts search order in /etc/nsswitch.conf on all nodes as shown here:


Old:

hosts: files nis dns

New:

hosts: dns files nis

  8. After modifying the nsswitch.conf file, restart the nscd daemon on each node using the following command:

# /sbin/service nscd restart

After you have completed the installation process, configure clients to use the SCAN to access the cluster. For example, if the cluster name is docrac, the clients would use docrac-scan to connect to the cluster.

The fully qualified SCAN for the cluster defaults to cluster_name-scan.GNS_subdomain_name, for example docrac-scan.example.com. The short SCAN for the cluster is docrac-scan. You can use any name for the SCAN, as long as it is unique within your network and conforms to the RFC 952 standard.

    2.3. Synchronizing the Time on ALL Nodes

Ensure that the date and time settings on all nodes are set as closely as possible to the same date and time. Time may be kept in sync with NTP with the -x option or by using Oracle Cluster Time Synchronization Service (ctssd). Instructions on configuring NTP with the -x option can be found in My Oracle Support ExtNote:551704.1.
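As a sketch of what the referenced note describes for Oracle Enterprise Linux / Red Hat Enterprise Linux 5 (the exact file contents may differ on your system), the -x option is added to the ntpd startup options in /etc/sysconfig/ntpd and the service is restarted:

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# service ntpd restart

If NTP is not configured at all, Oracle Clusterware will run ctssd in active mode and synchronize the cluster nodes itself.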

    2.4. Configuring Kernel Parameters

  1. As the root user, add the following kernel parameter settings to /etc/sysctl.conf. If any of the parameters are already set in /etc/sysctl.conf, the higher of the two values should be used.

kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6553600
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

NOTE: The latest information on kernel parameter settings for Linux can be found in My Oracle Support ExtNote:169706.1.
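To compare the required values with what is currently in effect (so that the higher value can be kept), the current settings can be displayed with sysctl before editing, for example:

# /sbin/sysctl -n fs.file-max
# /sbin/sysctl -n kernel.sem
# /sbin/sysctl -n net.ipv4.ip_local_port_range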

  2. Run the following as the root user to allow the new kernel parameters to be put in place:

# /sbin/sysctl -p

  3. Repeat steps 1 and 2 on all cluster nodes.

NOTE: OUI checks the current settings for various kernel parameters to ensure they meet the minimum requirements for deploying Oracle RAC.


    2.5. Set shell limits for the oracle user

To improve the performance of the software on Linux systems, you must increase the shell limits for the oracle and grid users.

  1. Add the following lines to the /etc/security/limits.conf file:

grid   soft   nproc    2047
grid   hard   nproc   16384
grid   soft   nofile   1024
grid   hard   nofile  65536
oracle soft   nproc    2047
oracle hard   nproc   16384
oracle soft   nofile   1024
oracle hard   nofile  65536

  2. Add or edit the following line in the /etc/pam.d/login file, if it does not already exist:

session required pam_limits.so

  3. Make the following changes to the default shell startup files. For the Bourne, Bash, or Korn shell, add the following lines to the /etc/profile file:

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi

For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file:

if ( $USER = "oracle" || $USER = "grid" ) then
    limit maxproc 16384
    limit descriptors 65536
endif

  4. Repeat this procedure on all other nodes in the cluster.
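The limits can be verified by switching to each software owner and checking the shell limits; the values should match those configured above:

# su - grid
$ ulimit -Sn          (soft nofile, expected 1024)
$ ulimit -Hn          (hard nofile, expected 65536)
$ ulimit -Su          (soft nproc, expected 2047)
$ ulimit -Hu          (hard nproc, expected 16384)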

    2.6. Create the Oracle Inventory Directory

To create the Oracle Inventory directory, enter the following commands as the root user:

# mkdir -p /u01/app/oraInventory
# chown -R grid:oinstall /u01/app/oraInventory
# chmod -R 775 /u01/app/oraInventory


    2.7. Creating the Oracle Grid Infrastructure Home Directory

To create the Grid Infrastructure home directory, enter the following commands as the root user:

# mkdir -p /u01/11.2.0/grid
# chown -R grid:oinstall /u01/11.2.0/grid
# chmod -R 775 /u01/11.2.0/grid

    2.8. Creating the Oracle Base Directory

To create the Oracle Base directory, enter the following commands as the root user:

# mkdir -p /u01/app/oracle
# mkdir /u01/app/oracle/cfgtoollogs
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle

NOTE: The cfgtoollogs directory is needed to ensure that dbca is able to run after the RDBMS installation.

    2.9. Creating the Oracle RDBMS Home Directory

To create the Oracle RDBMS Home directory, enter the following commands as the root user:

# mkdir -p /u01/app/oracle/product/11.2.0/db_1
# chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
# chmod -R 775 /u01/app/oracle/product/11.2.0/db_1

    2.10. Stage the Oracle Software

It is recommended that you stage the required software onto a local drive on Node 1 of your cluster. Ensure that you use only 32-bit versions of the Oracle software on 32-bit operating systems and 64-bit versions of the Oracle software on 64-bit operating systems.

Starting with the first patch set for Oracle Database 11g Release 2 (11.2.0.2), Oracle Database patch sets are full installations of the Oracle Database software. In past releases, Oracle Database patch sets consisted of sets of files that replaced files in an existing Oracle home. Beginning with Oracle Database 11g Release 2, patch sets are full (out-of-place) installations that replace existing installations. This simplifies the installation since you may simply install the latest patch set (version). You are no longer required to install the base release, and then apply the patch set. The 11.2.0.2.2 Patch Set is available for download via My Oracle Support under Patch 10098816. Reference My Oracle Support ExtNote:1189783.1 for more information on 'Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2.'

It is highly recommended that the latest Grid Infrastructure Patch Set Update (PSU) be installed prior to running root.sh (or rootupgrade.sh). At the time of this writing the latest Grid Infrastructure PSU is 11.2.0.2.2 (GI PSU #2), therefore the content provided in this RAC Guide will demonstrate the installation of GI 11.2.0.2.2 to the Grid Infrastructure home prior to running root.sh on each node in the cluster. The 11.2.0.2.2 GI PSU can be found under Patch 12311357 on My Oracle Support. Information on the latest PSUs for 11.2.0.2 can be found under My Oracle Support ExtNote:756671.1.
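As an example only (the zip file names below are placeholders, since the exact names depend on the platform and patch; the stage directory is arbitrary), the downloaded software could be unpacked on node 1 as follows:

$ mkdir -p /u01/stage
$ cd /u01/stage
$ unzip <grid_infrastructure_zip_from_patch_10098816>.zip
$ unzip <database_zip_from_patch_10098816>.zip

Unzipping creates the grid and database installation directories, each containing runInstaller.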

    2.11. Check OS Software Requirements

The OUI will check for missing packages during the install and you will have the opportunity to install them at that point during the prechecks. Nevertheless you might want to validate that all required packages have been installed prior to launching the OUI.


NOTE: These requirements are for 64-bit versions of Oracle Enterprise Linux 5 and Red Hat Enterprise Linux 5. Requirements for other supported platforms can be found in My Oracle Support ExtNote:169706.1.

binutils-2.15.92.0.2
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.97
elfutils-libelf-devel-0.97
expat-1.95.7
gcc-3.4.6
gcc-c++-3.4.6
glibc-2.3.4-2.41
glibc-2.3.4-2.41 (32 bit)
glibc-common-2.3.4
glibc-devel-2.3.4
glibc-headers-2.3.4
libaio-0.3.105
libaio-0.3.105 (32 bit)
libaio-devel-0.3.105
libaio-devel-0.3.105 (32 bit)
libgcc-3.4.6
libgcc-3.4.6 (32 bit)
libstdc++-3.4.6
libstdc++-3.4.6 (32 bit)
libstdc++-devel-3.4.6
make-3.80
pdksh-5.2.14
sysstat-5.0.5
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)

The following command can be run on the system to list the currently installed packages:

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
compat-libstdc++-33 \
elfutils-libelf \
elfutils-libelf-devel \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
glibc-headers \
ksh \
libaio \
libaio-devel \
libgcc \
libstdc++ \
libstdc++-devel \
make \
sysstat \
unixODBC \
unixODBC-devel

Any RPM missing from the list above should be installed using the "--aid" option of "/bin/rpm" to ensure all dependent packages are resolved and installed as well.

NOTE: Be sure to check on all nodes that the Linux firewall and SELinux are disabled.
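A quick way to confirm this on each node (Oracle Enterprise Linux / Red Hat Enterprise Linux 5) is:

# service iptables status
# getenforce

getenforce should return Permissive or Disabled; if necessary, the firewall can be stopped and disabled with "service iptables stop" and "chkconfig iptables off".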
