Oracle RAC 11g Release 1 on Linux (Part 3)

Most of the configuration procedures in this section should be performed on both Oracle RAC nodes in the cluster! Creating the OCFS2 file system, however, should only be executed on one of the nodes in the RAC cluster.

Overview

It is now time to install the Oracle Cluster File System, Release 2 (OCFS2). OCFS2, developed by Oracle Corporation, is a Cluster File System which allows all nodes in a cluster to concurrently access a device via the standard file system interface. This allows for easy management of applications that need to run across a cluster.

OCFS (Release 1) was released in December 2002 to enable Oracle Real Application Cluster (RAC) users to run the clustered database without having to deal with RAW devices. The file system was designed to store database related files, such as data files, control files, redo logs, archive logs, etc. OCFS2 is the next generation of the Oracle Cluster File System. It has been designed to be a general purpose cluster file system. With it, one can store not only database related files on a shared disk, but also store Oracle binaries and configuration files (shared Oracle Home) making management of RAC even easier.

In this article, I will be using the latest release of OCFS2 (1.2.7 at the time of this writing) to store the two files that are required to be shared by the Oracle Clusterware software. Along with these two files, I will also be using this space to store the shared ASM SPFILE for all Oracle RAC instances.

See the OCFS2 project page for more information on OCFS2 (including Installation Notes) for Linux.

Download OCFS2

First, let's download the latest OCFS2 distribution. The OCFS2 distribution consists of two sets of RPMs; namely, the kernel module and the tools. Both the latest kernel module and the tools are available for download from the OCFS2 project site.

Download the appropriate RPMs starting with the latest OCFS2 kernel module (the driver). With CentOS 5.1, I am using kernel release 2.6.18-53.el5. The appropriate OCFS2 kernel module was found in the latest release of OCFS2 at the time of this writing (1.2.7-1). The available OCFS2 kernel modules for Linux kernel 2.6.18-53.el5 are listed below. Always download the latest OCFS2 kernel module that matches the distribution, platform, kernel version and the kernel flavor (default kernel, PAE kernel, or xen kernel).

  - (for default kernel)

  - (for PAE kernel)

  - (for xen kernel)

For the tools, simply match the platform and distribution. You should download both the OCFS2 tools and the OCFS2 console applications.

  - (OCFS2 tools)

  - (OCFS2 console)

The OCFS2 Console is optional but highly recommended. The ocfs2console application requires e2fsprogs, glib2 2.5-12 or later, vte 0.14 or later, pygtk2 (EL5) or python-gtk (SLES9) 1.99.16 or later, python 2.4 or later and ocfs2-tools.

If you are not sure which OCFS2 driver release you need, use the one that matches your kernel release. To determine your kernel release:

$ uname -a

Linux linux1 2.6.18-53.el5 #1 SMP Mon Nov 12 02:22:48 EST 2007 i686 i686 i386 GNU/Linux
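If you want to double-check the match before installing anything, you can compare the running kernel against the kernel string embedded in the driver RPM file name. The snippet below is only a minimal sketch; the RPM file name is the one used later in this article, so substitute whichever driver RPM you actually downloaded:

KERNEL=$(uname -r)                                   # e.g. 2.6.18-53.el5
RPM=ocfs2-2.6.18-53.el5-1.2.7-1.el5.i686.rpm         # the driver RPM you downloaded
case "$RPM" in
  ocfs2-${KERNEL}-*) echo "Driver RPM matches running kernel ${KERNEL}" ;;
  *)                 echo "WARNING: ${RPM} was not built for kernel ${KERNEL}" ;;
esac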

Install OCFS2

I will be installing the OCFS2 files onto two single-processor machines. The installation process is simply a matter of running the following command on both Oracle RAC nodes in the cluster as the root user account:

$ su -

# rpm -Uvh ocfs2-2.6.18-53.el5-1.2.7-1.el5.i686.rpm \

ocfs2console-1.2.7-1.el5.i386.rpm \

ocfs2-tools-1.2.7-1.el5.i386.rpm

Preparing... ########################################### [100%]

1:ocfs2-tools ########################################### [ 33%]

2:ocfs2-2.6.18-53.el5 ########################################### [ 67%]

3:ocfs2console ########################################### [100%]
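As an optional sanity check, you can confirm that all three packages registered with the RPM database on each node. The package names are simply the file names installed above without the architecture suffix (the order of the output may differ):

# rpm -qa | grep -i ocfs2
ocfs2-2.6.18-53.el5-1.2.7-1.el5
ocfs2-tools-1.2.7-1.el5
ocfs2console-1.2.7-1.el5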

Disable SELinux (RHEL4 U2 and higher)

Users of RHEL4 U2 and higher (CentOS 5.1 is based on RHEL5 U1) are advised that OCFS2 currently does not work with SELinux enabled. If you are using RHEL4 U2 or higher (which includes us, since we are using CentOS 5.1), you will need to disable SELinux (using the system-config-securitylevel tool) to get the O2CB service to execute.

A ticket has been logged with Red Hat on this issue.

During the installation of CentOS, we disabled SELinux on the SELinux screen. If, however, you did not disable SELinux during the installation phase (or if you simply want to verify it is truly disabled), you can use the GUI utility system-config-securitylevel to disable SELinux:

# /usr/bin/system-config-securitylevel &

This will bring up the following screen:

Figure 13: Security Level Configuration Opening Screen

Now, click the SELinux tab and select the "Disabled" option. After clicking [OK], you will be presented with a warning dialog. Simply acknowledge this warning by clicking "Yes". Your screen should now look like the following after disabling the SELinux option:

Figure 14: SELinux Disabled

If you needed to disable SELinux in this section on any of the nodes, those nodes will need to be rebooted to implement the change. SELinux must be disabled before you can continue with configuring OCFS2!

# init 6
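If you prefer to verify and disable SELinux from the command line rather than the GUI, the following is a minimal sketch that performs the equivalent change by editing /etc/selinux/config (it assumes the standard RHEL5/CentOS 5 SELinux layout); a reboot is still required afterwards:

# getenforce                                          # reports Enforcing, Permissive or Disabled
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# grep '^SELINUX=' /etc/selinux/config                # should now show SELINUX=disabled
# init 6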

Configure OCFS2

The next step is to generate and configure the /etc/ocfs2/cluster.conf file on both Oracle RAC nodes in the cluster. The easiest way to accomplish this is to run the GUI tool ocfs2console. In this section, we will not only create and configure the /etc/ocfs2/cluster.conf file using ocfs2console, but will also create and start the cluster stack O2CB. When the /etc/ocfs2/cluster.conf file is not present (as will be the case in our example), the ocfs2console tool will create this file along with a new cluster stack service (O2CB) with a default cluster name of ocfs2. This will need to be done on both Oracle RAC nodes in the cluster as the root user account:

Note that OCFS2 will be configured to use the private network (192.168.2.0) for all of its network traffic as recommended by Oracle. While OCFS2 does not take much bandwidth, it does require the nodes to be alive on the network and sends regular keepalive packets to ensure that they are. To avoid a network delay being interpreted as a node disappearing on the net which could lead to a node-self-fencing, a private interconnect is recommended. It is safe to use the same private interconnect for both Oracle RAC and OCFS2.

A popular question then is what node name should be used and should it be related to the IP address? The node name needs to match the hostname of the machine. The IP address need not be the one associated with that hostname. In other words, any valid IP address on that node can be used. OCFS2 will not attempt to match the node name (hostname) with the specified IP address.
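Before typing anything into ocfs2console, it is worth confirming the two values you will enter for each node. This is only a quick sketch and assumes the private interconnect addresses used in this article (192.168.2.100 and 192.168.2.101) are already configured on the nodes:

# hostname                                            # must match the node name you will enter
# /sbin/ip addr | grep 'inet 192.168.2.'              # the private IP you will enter for this node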

$ su -

# ocfs2console &

This will bring up the GUI as shown below:

Figure 15: ocfs2console Screen

Using the ocfs2console GUI tool, perform the following steps:

Select [Cluster] -> [Configure Nodes...]. This will start the OCFS2 Cluster Stack (Figure 16). Acknowledge this Information dialog box by clicking [Close]. You will then be presented with the "Node Configuration" dialog.

On the "Node Configurtion" dialog, click the [Add] button.

This will bring up the "Add Node" dialog.

In the "Add Node" dialog, enter the Host name and IP address for the first node in the cluster. Leave the IP Port set to its default value of 7777. In my example, I added both nodes using linux1 / 192.168.2.100 for the first node and linux2 / 192.168.2.101 for the second node.

Note: The node name you enter "must" match the hostname of the machine and the IP addresses will use the private interconnect.

Click [Apply] on the "Node Configuration" dialog - All nodes should now be "Active" as shown in Figure 17.

Click [Close] on the "Node Configuration" dialog.

After verifying all values are correct, exit the application using [File] -> [Quit]. This needs to be performed on both Oracle RAC nodes in the cluster.

Figure 16: Starting the OCFS2 Cluster Stack

The following dialog shows the OCFS2 settings I used for the nodes linux1 and linux2:

Figure 17: Configuring Nodes for OCFS2

After exiting the ocfs2console, you will have a /etc/ocfs2/cluster.conf similar to the following. This process needs to be completed on both Oracle RAC nodes in the cluster and the OCFS2 configuration file should be exactly the same for both of the nodes:

/etc/ocfs2/cluster.conf

node:
        ip_port = 7777
        ip_address = 192.168.2.100
        number = 0
        name = linux1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.2.101
        number = 1
        name = linux2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
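Because this file must be identical on both nodes, a quick checksum comparison is an easy way to catch a mismatch before starting the cluster stack. This is only a sketch and assumes root can ssh from linux1 to linux2 (otherwise simply compare the files by hand):

# md5sum /etc/ocfs2/cluster.conf
# ssh linux2 md5sum /etc/ocfs2/cluster.conf

The two checksums should be identical; if they are not, correct (or copy over) the file on the node that differs before continuing.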

O2CB Cluster Service

Before we can do anything with OCFS2, such as formatting or mounting the file system, we first need OCFS2's cluster stack, O2CB, to be running (which it will be, as a result of the configuration process performed above). The stack includes the following services:

NM: Node Manager that keeps track of all the nodes in cluster.conf

HB: Heartbeat service that issues up/down notifications when nodes join or leave the cluster

TCP: Handles communication between the nodes

DLM: Distributed lock manager that keeps track of all locks, their owners, and their status

CONFIGFS: User space driven configuration file system mounted at /config

DLMFS: User space interface to the kernel space DLM

All of the above cluster services have been packaged in the o2cb system service (/etc/init.d/o2cb). Here is a short listing of some of the more useful commands and options for the o2cb system service.

The following commands are for documentation purposes only and do not need to be run when installing and configuring OCFS2 for this article!

# /etc/init.d/o2cb status
Module "configfs": Loaded

Filesystem "configfs": Mounted

Module "ocfs2_nodemanager": Loaded

Module "ocfs2_dlm": Loaded

Module "ocfs2_dlmfs": Loaded

Filesystem "ocfs2_dlmfs": Mounted

Checking O2CB cluster ocfs2: Online

Heartbeat dead threshold: 31

Network idle timeout: 30000

Network keepalive delay: 2000

Network reconnect delay: 2000

Checking O2CB heartbeat: Not active

# /etc/init.d/o2cb offline ocfs2
Stopping O2CB cluster ocfs2: OK

The above command will offline the cluster we created, ocfs2.

# /etc/init.d/o2cb unload
Unmounting ocfs2_dlmfs filesystem: OK

Unloading module "ocfs2_dlmfs": OK

Unmounting configfs filesystem: OK

Unloading module "configfs": OKThe above command will unload all OCFS2 modules.

# /etc/init.d/o2cb load
Loading module "configfs": OK

Mounting configfs filesystem at /config: OK

Loading module "ocfs2_nodemanager": OK

Loading module "ocfs2_dlm": OK

Loading module "ocfs2_dlmfs": OK

Mounting ocfs2_dlmfs filesystem at /dlm: OK

The above command loads all OCFS2 modules.

# /etc/init.d/o2cb online ocfs2
Starting O2CB cluster ocfs2: OK

The above command will online the cluster we created, ocfs2.

Configure O2CB to Start on Boot and Adjust O2CB Heartbeat Threshold

You now need to configure the on-boot properties of the O2CB driver so that the cluster stack services will start on each boot. You will also be adjusting the OCFS2 Heartbeat Threshold from its default setting of 31 to 61. Perform the following on both Oracle RAC nodes in the cluster:

With releases of OCFS2 prior to 1.2.1, a bug existed where the driver would not get loaded on each boot even after configuring the on-boot properties to do so. This bug was fixed in release 1.2.1 of OCFS2 and does not need to be addressed in this article. If however you are using a release of OCFS2 prior to 1.2.1, please see the Troubleshooting section for a workaround to this bug.

Set the on-boot properties as follows:

# /etc/init.d/o2cb offline ocfs2

# /etc/init.d/o2cb unload

# /etc/init.d/o2cb configure

Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.

The following questions will determine whether the driver is loaded on

boot. The current values will be shown in brackets ('[]'). Hitting

<ENTER> without typing an answer will keep that current value. Ctrl-C

will abort.

Load O2CB driver on boot (y/n) [n]: y

Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2

Specify heartbeat dead threshold (>=7) [31]: 61

Specify network idle timeout in ms (>=5000) [30000]: 30000

Specify network keepalive delay in ms (>=1000) [2000]: 2000

Specify network reconnect delay in ms (>=2000) [2000]: 2000

Writing O2CB configuration: OK

Loading module "configfs": OK

Mounting configfs filesystem at /sys/kernel/config: OK

Loading module "ocfs2_nodemanager": OK

Loading module "ocfs2_dlm": OK

Loading module "ocfs2_dlmfs": OK

Mounting ocfs2_dlmfs filesystem at /dlm: OK

Starting O2CB cluster ocfs2: OK
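The answers given above are persisted in /etc/sysconfig/o2cb. To confirm that the new heartbeat threshold and on-boot settings were saved on both nodes, you can grep that file; the variable names shown are the ones used by the OCFS2 1.2 tools and may differ slightly in other releases:

# grep -E 'O2CB_ENABLED|O2CB_BOOTCLUSTER|O2CB_HEARTBEAT_THRESHOLD' /etc/sysconfig/o2cb
O2CB_ENABLED=true
O2CB_BOOTCLUSTER=ocfs2
O2CB_HEARTBEAT_THRESHOLD=61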

Format the OCFS2 File System

Unlike the other tasks in this section, creating the OCFS2 file system should only be executed on one of the nodes in the RAC cluster. I will be executing all commands in this section from linux1 only.

We can now start to make use of the iSCSI volume we partitioned for OCFS2 in the section "Create Partitions on iSCSI Volumes".

If the O2CB cluster is offline, start it. The format operation needs the cluster to be online, as it needs to ensure that the volume is not mounted on some other node in the cluster.

Earlier in this document, we created the directory /u02 under the section Create Mount Point for OCFS2 / Clusterware which will be used as the mount point for the OCFS2 cluster file system. This section contains the commands to create and mount the file system to be used for the Cluster Manager.

Note that it is possible to create and mount the OCFS2 file system using either the GUI tool ocfs2console or the command-line tool mkfs.ocfs2. From the ocfs2console utility, use the menu [Tasks] -> [Format].

See the instructions below on how to create the OCFS2 file system using the command-line tool mkfs.ocfs2.

To create the file system, we can use the Oracle executable mkfs.ocfs2. For the purpose of this example, I run the following command only from linux1 as the root user account, using the local SCSI device name mapped to the iSCSI volume for crs — /dev/iscsi/crs/part1. Also note that I specified a label named "oracrsfiles" which will be referred to when mounting or unmounting the volume:

$ su -

# mkfs.ocfs2 -b 4K -C 32K -N 4 -L oracrsfiles /dev/iscsi/crs/part1

mkfs.ocfs2 1.2.7

Filesystem label=oracrsfiles

Block size=4096 (bits=12)

Cluster size=32768 (bits=15)

Volume size=2145943552 (65489 clusters) (523912 blocks)

3 cluster groups (tail covers 977 clusters, rest cover 32256 clusters)

Journal size=67108864

Initial number of node slots: 4

Creating bitmaps: done

Initializing superblock: done

Writing system files: done

Writing superblock: done

Writing backup superblock: 1 block(s)

Formatting Journals: done

Writing lost+found: done

mkfs.ocfs2 successful
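Before moving on to mounting, you can optionally verify from each node that the newly formatted volume is visible with the expected label. The mounted.ocfs2 utility ships with ocfs2-tools; in detect mode it lists every device containing an OCFS2 file system along with its UUID and label (the device name may differ between nodes):

# mounted.ocfs2 -d

You should see the device mapped to the iSCSI crs volume reported with the label oracrsfiles.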

Mount the OCFS2 File System

Now that the file system is created, we can mount it. Let's first do it using the command line, then I'll show how to include it in /etc/fstab to have it mounted on each boot.

Mounting the file system will need to be performed on both nodes in the Oracle RAC cluster as the root user account using the OCFS2 label oracrsfiles!

First, here is how to manually mount the OCFS2 file system from the command line. Remember that this needs to be performed as the root user account:

$ su -

# mount -t ocfs2 -o datavolume,nointr -L "oracrsfiles" /u02

If the mount was successful, you will simply get your prompt back. We should, however, run the following checks to ensure the file system is mounted correctly. Let's use the mount command to verify that the new file system is really mounted. This should be performed on both nodes in the RAC cluster:

# mount

/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

/dev/hda1 on /boot type ext3 (rw)

tmpfs on /dev/shm type tmpfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

cartman:SHARE2 on /cartman type nfs (rw,addr=192.168.1.120)

configfs on /sys/kernel/config type configfs (rw)

ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)

/dev/sda1 on /u02 type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)

Please take note of the datavolume option I am using to mount the new file system. Oracle database users must mount any volume that will contain the Voting Disk file, Cluster Registry (OCR), Data files, Redo logs, Archive logs and Control files with the datavolume mount option so as to ensure that the Oracle processes open the files with the o_direct flag. The nointr option ensures that the I/O's are not interrupted by signals.

Any other type of volume, including an Oracle home (which I will not be using for this article), should not be mounted with this mount option.

Why does it take so much time to mount the volume? It takes around 5 seconds for a volume to mount. It does so as to let the heartbeat thread stabilize. In a later release, Oracle plans to add support for a global heartbeat, which will make most mounts instant.

Configure OCFS2 to Mount Automatically at Startup

Let's take a look at what you have done so far. You installed the OCFS2 software packages which will be used to store the shared files needed by Cluster Manager. After going through the install, you loaded the OCFS2 module into the kernel and then formatted the clustered file system. Finally, you mounted the newly created file system using the OCFS2 label "oracrsfiles". This section walks through the steps responsible for mounting the new OCFS2 file system each time the machine(s) are booted, using its label.

We start by adding the following line to the /etc/fstab file on both nodes in the RAC cluster:

LABEL=oracrsfiles     /u02           ocfs2   _netdev,datavolume,nointr     0 0

Notice the "_netdev" option for mounting this file system. The _netdev mount option is a must for OCFS2 volumes. This mount option indicates that the volume is to be mounted after the network is started and dismounted before the network is shutdown.

Now, let's make sure that the ocfs2.ko kernel module is being loaded and that the file system will be mounted during the boot process.

If you have been following along with the examples in this article, the actions to load the kernel module and mount the OCFS2 file system should already be enabled. However, we should still check those options by running the following on both nodes in the RAC cluster as the root user account:

$ su -

# chkconfig --list o2cb

o2cb 0:off 1:off 2:on 3:on 4:on 5:on 6:off

The flags for runlevels 2 through 5 should be set to "on".
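If any of those runlevel flags show "off", they can be switched on with chkconfig. It is also worth checking the ocfs2 init script (installed by ocfs2-tools), which is responsible for mounting the OCFS2 entries in /etc/fstab at boot; this is a sketch of the commands I would expect to use on both nodes:

# chkconfig o2cb on
# chkconfig ocfs2 on
# chkconfig --list ocfs2
ocfs2 0:off 1:off 2:on 3:on 4:on 5:on 6:off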

Check Permissions on New OCFS2 File System

Use the ls command to check ownership. The permissions should be set to 0775 with owner "oracle" and group "oinstall".

The following tasks only need to be executed on one of the nodes in the RAC cluster. I will be executing all commands in this section from linux1 only.

Let's first check the permissions:

# ls -ld /u02

drwxr-xr-x 3 root root 4096 Dec 13 15:17 /u02

As we can see from the listing above, the oracle user account (and the oinstall group) will not be able to write to this directory. Let's fix that:

# chown oracle:oinstall /u02

# chmod 775 /u02

Let's now go back and re-check that the permissions are correct for both Oracle RAC nodes in the cluster:

# ls -ld /u02

drwxrwxr-x 3 oracle oinstall 4096 Dec 13 15:17 /u02

Create Directory for Oracle Clusterware Files

The last mandatory task is to create the appropriate directory on the new OCFS2 file system that will be used for the Oracle Clusterware shared files. We will also modify the permissions of this new directory to allow the "oracle" owner and group "oinstall" read/write access.

The following tasks only need to be executed on one of the nodes in the RAC cluster. I will be executing all commands in this section from linux1 only.

# mkdir -p /u02/oradata/orcl

# chown -R oracle:oinstall /u02/oradata

# chmod -R 775 /u02/oradata

# ls -l /u02/oradata

total 4

drwxrwxr-x 2 oracle oinstall 4096 Dec 13 16:00 orcl

Reboot Both Nodes

Before starting the next section, this would be a good place to reboot both of the nodes in the RAC cluster. When the machines come up, ensure that the cluster stack services are being loaded and the new OCFS2 file system is being mounted:

# mount

/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

/dev/hda1 on /boot type ext3 (rw)

tmpfs on /dev/shm type tmpfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

cartman:SHARE2 on /cartman type nfs (rw,addr=192.168.1.120)

configfs on /sys/kernel/config type configfs (rw)

ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)

/dev/sda1 on /u02 type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)

If you modified the O2CB heartbeat threshold, you should verify that it is set correctly:

# cat /proc/fs/ocfs2_nodemanager/hb_dead_threshold

61

How to Determine OCFS2 Version

To determine which version of OCFS2 is running, use:

# cat /proc/fs/ocfs2/version

OCFS2 1.2.7 Mon Nov 12 15:50:25 PST 2007 (build d443ce77532cea8d1e167ab2de51b8c8)
