Simply place the archive under gridinstall. Note: upload and extract this file as the grid user, so no file-permission issues arise.
1) After extraction, go into the sshsetup folder inside it and run the following three commands to configure SSH user equivalence (mutual trust) between the two nodes.
There are plenty of manual ways to set up node-to-node trust described online; here we simply use the script Oracle ships to create the trust keys and configure the equivalence.
./sshUserSetup.sh -user grid -hosts "rac1 rac2" -advanced -noPromptPassphrase
./sshUserSetup.sh -user oracle -hosts "rac1 rac2" -advanced -noPromptPassphrase
./sshUserSetup.sh -user root -hosts "rac1 rac2" -advanced -noPromptPassphrase
Read this part carefully: you will be prompted for a password four times in total.
----- Here we use the root user as the example
[grid@rac1 sshsetup]$ ./sshUserSetup.sh -user root -hosts "rac1 rac2" -advanced -noPromptPassphrase
The output of this script is also logged into /tmp/sshUserSetup_2021-02-18-15-57-01.log
Hosts are rac1 rac2
user is root
Platform:- Linux
Checking if the remote hosts are reachable
PING rac1 (192.168.8.1) 56(84) bytes of data.
64 bytes from rac1 (192.168.8.1): icmp_seq=1 ttl=64 time=0.014 ms
64 bytes from rac1 (192.168.8.1): icmp_seq=2 ttl=64 time=0.022 ms
64 bytes from rac1 (192.168.8.1): icmp_seq=3 ttl=64 time=0.022 ms
64 bytes from rac1 (192.168.8.1): icmp_seq=4 ttl=64 time=0.021 ms
64 bytes from rac1 (192.168.8.1): icmp_seq=5 ttl=64 time=0.060 ms
--- rac1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4001ms
rtt min/avg/max/mdev = 0.014/0.027/0.060/0.017 ms
PING rac2 (192.168.8.2) 56(84) bytes of data.
64 bytes from rac2 (192.168.8.2): icmp_seq=1 ttl=64 time=0.419 ms
64 bytes from rac2 (192.168.8.2): icmp_seq=2 ttl=64 time=0.479 ms
64 bytes from rac2 (192.168.8.2): icmp_seq=3 ttl=64 time=0.572 ms
64 bytes from rac2 (192.168.8.2): icmp_seq=4 ttl=64 time=0.463 ms
64 bytes from rac2 (192.168.8.2): icmp_seq=5 ttl=64 time=0.512 ms
--- rac2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4009ms
rtt min/avg/max/mdev = 0.419/0.489/0.572/0.051 ms
Remote host reachability check succeeded.
The following hosts are reachable: rac1 rac2.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost rac1
numhosts 2
The script will setup SSH connectivity from the host rac1 to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host rac1
and the remote hosts without being prompted for passwords or confirmations.
NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.
NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.
---- Enter yes here
Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes
The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup.
Creating .ssh directory and setting permissions on remote host rac1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR root. THIS IS AN SSH REQUIREMENT.
The script would create ~root/.ssh/config file on remote host rac1. If a config file exists already at ~root/.ssh/config, it would be backed up to ~root/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host rac1.
Warning: Permanently added 'rac1,192.168.8.1' (ECDSA) to the list of known hosts.
---- Enter the password at each prompt; there are four prompts in total.
root@rac1's password:
Done with creating .ssh directory and setting permissions on remote host rac1.
Creating .ssh directory and setting permissions on remote host rac2
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR root. THIS IS AN SSH REQUIREMENT.
The script would create ~root/.ssh/config file on remote host rac2. If a config file exists already at ~root/.ssh/config, it would be backed up to ~root/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host rac2.
Warning: Permanently added 'rac2,192.168.8.2' (ECDSA) to the list of known hosts.
root@rac2's password:
Done with creating .ssh directory and setting permissions on remote host rac2.
Copying local host public key to the remote host rac1
The user may be prompted for a password or passphrase here since the script would be using SCP for host rac1.
root@rac1's password:
Done copying local host public key to the remote host rac1
Copying local host public key to the remote host rac2
The user may be prompted for a password or passphrase here since the script would be using SCP for host rac2.
root@rac2's password:
Done copying local host public key to the remote host rac2
Creating keys on remote host rac1 if they do not exist already. This is required to setup SSH on host rac1.
Generating public/private rsa key pair.
Your identification has been saved in .ssh/id_rsa.
Your public key has been saved in .ssh/id_rsa.pub.
The key fingerprint is:
SHA256:EESQj0T+SQDn3YtAqLu0D3wyJIKfpuguSET8LCCRtP8 root@rac1
The key's randomart image is:
+---[RSA 1024]----+
|++.+*=+ |
|+oo=oo o |
|+oo.+o+ . |
|oo.o.+.+ . |
|=o.. + S |
|*+ .. |
|+*=. E |
|=+= |
|*o.. |
+----[SHA256]-----+
Creating keys on remote host rac2 if they do not exist already. This is required to setup SSH on host rac2.
Generating public/private rsa key pair.
Your identification has been saved in .ssh/id_rsa.
Your public key has been saved in .ssh/id_rsa.pub.
The key fingerprint is:
SHA256:VMUvUngb2qKb/KPhwXykmH3tFvBJWvvf4etcVDc6ufg root@rac2
The key's randomart image is:
+---[RSA 1024]----+
| .+. |
| .. = |
| . = + .o|
| . = * + +|
| S..O B .|
| *.o..* o. |
| o.Boo..+ ..|
| .+=..o + +|
| oo.o.E.*+|
+----[SHA256]-----+
Updating authorized_keys file on remote host rac1
Updating known_hosts file on remote host rac1
Updating authorized_keys file on remote host rac2
Updating known_hosts file on remote host rac2
SSH setup is complete.
------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user root.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~root or ~root/.ssh on the remote host may not be owned by root.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--rac1:--
Running /usr/bin/ssh -x -l root rac1 date to verify SSH connectivity has been setup from local host to rac1.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Thu Feb 18 15:57:31 CST 2021
------------------------------------------------------------------------
--rac2:--
Running /usr/bin/ssh -x -l root rac2 date to verify SSH connectivity has been setup from local host to rac2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Thu Feb 18 23:57:37 CST 2021
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from rac1 to rac1
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
bash: -c: line 0: unexpected EOF while looking for matching `"'
bash: -c: line 1: syntax error: unexpected end of file
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from rac1 to rac2
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
bash: -c: line 0: unexpected EOF while looking for matching `"'
bash: -c: line 1: syntax error: unexpected end of file
------------------------------------------------------------------------
-Verification from complete-
SSH verification complete.
At this point the SSH setup has succeeded.
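After sshUserSetup.sh has been run for all three users, it is worth confirming equivalence yourself before moving on. A minimal sketch (the `check_equiv` helper is our own, not part of sshUserSetup.sh; hostnames match this guide's rac1/rac2). BatchMode=yes makes ssh fail instead of prompting, so any node that would still ask for a password is reported as NOT OK:

```shell
#!/bin/sh
# Verify passwordless SSH for a given user against a list of hosts.
# BatchMode=yes forbids password prompts, so a misconfigured node
# shows up as "NOT OK" instead of hanging on a password prompt.
check_equiv() {
  user=$1; shift
  for host in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "${user}@${host}" true 2>/dev/null; then
      echo "${user}@${host}: OK"
    else
      echo "${user}@${host}: NOT OK"
    fi
  done
}

# Run this from each node after sshUserSetup.sh has completed:
check_equiv grid rac1 rac2
check_equiv oracle rac1 rac2
```

Repeat from both nodes; equivalence has to work in every direction for the installer's checks to pass.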
2) Install the required packages, so that the pre-check does not report too many errors.
yum -y install elfutils-libelf elfutils-libelf-devel elfutils-libelf-devel-static libaio-devel compat-libstdc++-33.x86_64
[root@rac1 shell]# rpm -ivh pdksh-5.2.14-37.el5_8.1.x86_64.rpm
warning: pdksh-5.2.14-37.el5_8.1.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID e8562897: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:pdksh-5.2.14-37.el5_8.1 ################################# [100%]
Go into the rpm folder inside the installation package and run:
[root@rac1 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing... ################################# [100%]
Using default group oinstall to install package
Updating / installing...
1:cvuqdisk-1.0.9-1 ################################# [100%]
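With the installs above done, a quick presence check catches anything yum silently skipped. A small sketch (`check_rpms` is our own helper name, not a system tool):

```shell
#!/bin/sh
# Query each package with rpm and flag anything that is not installed.
check_rpms() {
  for p in "$@"; do
    if rpm -q "$p" >/dev/null 2>&1; then
      echo "$p ok"
    else
      echo "$p MISSING"
    fi
  done
}

# The packages installed in this step:
check_rpms elfutils-libelf elfutils-libelf-devel elfutils-libelf-devel-static \
           libaio-devel compat-libstdc++-33 pdksh cvuqdisk
```

Run it on both nodes; anything marked MISSING will surface again as a failure in the cluvfy pre-check below.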
3) Run the pre-installation checks
Many guides only start checking once errors appear at the final installation step; I don't recommend that. Checking first makes the later installation much smoother, and Oracle's official check tool is very thorough.
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
The check session looks like this:
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "rac1"
Destination Node Reachable?
------------------------------------ ------------------------
rac2 yes
rac1 yes
Result: Node reachability check passed from node "rac1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
rac2 passed
rac1 passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
rac2 passed
rac1 passed
Verification of the hosts config file successful
Interface information for node "rac2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
ens33 192.168.8.2 192.168.8.0 0.0.0.0 10.10.1.1 00:0C:29:D0:BA:8A 1500
ens33 192.168.8.202 192.168.8.0 0.0.0.0 10.10.1.1 00:0C:29:D0:BA:8A 1500
ens34 10.10.1.202 10.10.1.0 0.0.0.0 10.10.1.1 00:0C:29:D0:BA:94 1500
Interface information for node "rac1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
ens33 192.168.8.1 192.168.8.0 0.0.0.0 10.10.1.1 00:0C:29:F1:68:F5 1500
ens33 192.168.8.101 192.168.8.0 0.0.0.0 10.10.1.1 00:0C:29:F1:68:F5 1500
ens34 10.10.1.101 10.10.1.0 0.0.0.0 10.10.1.1 00:0C:29:F1:68:FF 1500
Check: Node connectivity of subnet "192.168.8.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac2[192.168.8.2] rac2[192.168.8.202] yes
rac2[192.168.8.2] rac1[192.168.8.1] yes
rac2[192.168.8.2] rac1[192.168.8.101] yes
rac2[192.168.8.202] rac1[192.168.8.1] yes
rac2[192.168.8.202] rac1[192.168.8.101] yes
rac1[192.168.8.1] rac1[192.168.8.101] yes
Result: Node connectivity passed for subnet "192.168.8.0" with node(s) rac2,rac1
Check: TCP connectivity of subnet "192.168.8.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac1:192.168.8.1 rac2:192.168.8.2 passed
rac1:192.168.8.1 rac2:192.168.8.202 passed
rac1:192.168.8.1 rac1:192.168.8.101 passed
Result: TCP connectivity check passed for subnet "192.168.8.0"
Check: Node connectivity of subnet "10.10.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac2[10.10.1.202] rac1[10.10.1.101] yes
Result: Node connectivity passed for subnet "10.10.1.0" with node(s) rac2,rac1
Check: TCP connectivity of subnet "10.10.1.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
rac1:10.10.1.101 rac2:10.10.1.202 passed
Result: TCP connectivity check passed for subnet "10.10.1.0"
Interfaces found on subnet "10.10.1.0" that are likely candidates for VIP are:
rac2 ens34:10.10.1.202
rac1 ens34:10.10.1.101
Interfaces found on subnet "192.168.8.0" that are likely candidates for a private interconnect are:
rac2 ens33:192.168.8.2 ens33:192.168.8.202
rac1 ens33:192.168.8.1 ens33:192.168.8.101
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.8.0".
Subnet mask consistency check passed for subnet "10.10.1.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.8.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.8.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "10.10.1.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.1.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking ASMLib configuration.
Node Name Status
------------------------------------ ------------------------
rac2 passed
rac1 passed
Result: Check for ASMLib configuration passed.
Check: Total memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 3.8396GB (4026164.0KB) 1.5GB (1572864.0KB) passed
rac1 3.8396GB (4026164.0KB) 1.5GB (1572864.0KB) passed
Result: Total memory check passed
Check: Available memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 3.1981GB (3353448.0KB) 50MB (51200.0KB) passed
rac1 2.9084GB (3049664.0KB) 50MB (51200.0KB) passed
Result: Available memory check passed
Check: Swap space
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 3.875GB (4063228.0KB) 3.8396GB (4026164.0KB) passed
rac1 3.875GB (4063228.0KB) 3.8396GB (4026164.0KB) passed
Result: Swap space check passed
Check: Free disk space for "rac2:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/tmp rac2 / 32.7637GB 1GB passed
Result: Free disk space check passed for "rac2:/tmp"
Check: Free disk space for "rac1:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/tmp rac1 / 30.0753GB 1GB passed
Result: Free disk space check passed for "rac1:/tmp"
Check: User existence for "grid"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed exists(201)
rac1 passed exists(201)
Checking for multiple users with UID value 201
Result: Check for multiple users with UID value 201 passed
Result: User existence check passed for "grid"
Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed exists
rac1 passed exists
Result: Group existence check passed for "oinstall"
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed exists
rac1 passed exists
Result: Group existence check passed for "dba"
Check: Membership of user "grid" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Status
---------------- ------------ ------------ ------------ ------------ ------------
rac2 yes yes yes yes passed
rac1 yes yes yes yes passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed
Check: Membership of user "grid" in group "dba"
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
rac2 yes yes yes passed
rac1 yes yes yes passed
Result: Membership check for user "grid" in group "dba" passed
Check: Run level
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
rac2 5 3,5 passed
rac1 5 3,5 passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac2 hard 65536 65536 passed
rac1 hard 65536 65536 passed
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac2 soft 1024 1024 passed
rac1 soft 1024 1024 passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac2 hard 16384 16384 passed
rac1 hard 16384 16384 passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
rac2 soft 16384 2047 passed
rac1 soft 16384 2047 passed
Result: Soft limits check passed for "maximum user processes"
Check: System architecture
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 x86_64 x86_64 passed
rac1 x86_64 x86_64 passed
Result: System architecture check passed
Check: Kernel version
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 3.10.0-1127.el7.x86_64 2.6.9 passed
rac1 3.10.0-1127.el7.x86_64 2.6.9 passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 250 250 250 passed
rac1 250 250 250 passed
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 32000 32000 32000 passed
rac1 32000 32000 32000 passed
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 100 100 100 passed
rac1 100 100 100 passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 128 128 128 passed
rac1 128 128 128 passed
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 2147483648 2147483648 2061395968 passed
rac1 2147483648 2147483648 2061395968 passed
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 4096 4096 4096 passed
rac1 4096 4096 4096 passed
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 2097152 2097152 2097152 passed
rac1 2097152 2097152 2097152 passed
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 6815744 6815744 6815744 passed
rac1 6815744 6815744 6815744 passed
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed
rac1 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 262144 262144 262144 passed
rac1 262144 262144 262144 passed
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 4194304 4194304 4194304 passed
rac1 4194304 4194304 4194304 passed
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 262144 262144 262144 passed
rac1 262144 262144 262144 passed
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 1048576 1048576 1048576 passed
rac1 1048576 1048576 1048576 passed
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 1048576 1048576 1048576 passed
rac1 1048576 1048576 1048576 passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "make"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 make-3.82-24.el7 make-3.80 passed
rac1 make-3.82-24.el7 make-3.80 passed
Result: Package existence check passed for "make"
Check: Package existence for "binutils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 binutils-2.27-43.base.el7 binutils-2.15.92.0.2 passed
rac1 binutils-2.27-43.base.el7 binutils-2.15.92.0.2 passed
Result: Package existence check passed for "binutils"
Check: Package existence for "gcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 gcc(x86_64)-4.8.5-39.el7 gcc(x86_64)-3.4.6 passed
rac1 gcc(x86_64)-4.8.5-39.el7 gcc(x86_64)-3.4.6 passed
Result: Package existence check passed for "gcc(x86_64)"
Check: Package existence for "libaio(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libaio(x86_64)-0.3.109-13.el7 libaio(x86_64)-0.3.105 passed
rac1 libaio(x86_64)-0.3.109-13.el7 libaio(x86_64)-0.3.105 passed
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "glibc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 glibc(x86_64)-2.17-307.el7.1 glibc(x86_64)-2.3.4-2.41 passed
rac1 glibc(x86_64)-2.17-307.el7.1 glibc(x86_64)-2.3.4-2.41 passed
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "compat-libstdc++-33(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 compat-libstdc++-33(x86_64)-3.2.3-72.el7 compat-libstdc++-33(x86_64)-3.2.3 passed
rac1 compat-libstdc++-33(x86_64)-3.2.3-72.el7 compat-libstdc++-33(x86_64)-3.2.3 passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "elfutils-libelf(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 elfutils-libelf(x86_64)-0.176-5.el7 elfutils-libelf(x86_64)-0.97 passed
rac1 elfutils-libelf(x86_64)-0.176-5.el7 elfutils-libelf(x86_64)-0.97 passed
Result: Package existence check passed for "elfutils-libelf(x86_64)"
Check: Package existence for "elfutils-libelf-devel"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 elfutils-libelf-devel-0.176-5.el7 elfutils-libelf-devel-0.97 passed
rac1 elfutils-libelf-devel-0.176-5.el7 elfutils-libelf-devel-0.97 passed
Result: Package existence check passed for "elfutils-libelf-devel"
Check: Package existence for "glibc-common"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 glibc-common-2.17-307.el7.1 glibc-common-2.3.4 passed
rac1 glibc-common-2.17-307.el7.1 glibc-common-2.3.4 passed
Result: Package existence check passed for "glibc-common"
Check: Package existence for "glibc-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 glibc-devel(x86_64)-2.17-307.el7.1 glibc-devel(x86_64)-2.3.4 passed
rac1 glibc-devel(x86_64)-2.17-307.el7.1 glibc-devel(x86_64)-2.3.4 passed
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "glibc-headers"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 glibc-headers-2.17-307.el7.1 glibc-headers-2.3.4 passed
rac1 glibc-headers-2.17-307.el7.1 glibc-headers-2.3.4 passed
Result: Package existence check passed for "glibc-headers"
Check: Package existence for "gcc-c++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 gcc-c++(x86_64)-4.8.5-39.el7 gcc-c++(x86_64)-3.4.6 passed
rac1 gcc-c++(x86_64)-4.8.5-39.el7 gcc-c++(x86_64)-3.4.6 passed
Result: Package existence check passed for "gcc-c++(x86_64)"
Check: Package existence for "libaio-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libaio-devel(x86_64)-0.3.109-13.el7 libaio-devel(x86_64)-0.3.105 passed
rac1 libaio-devel(x86_64)-0.3.109-13.el7 libaio-devel(x86_64)-0.3.105 passed
Result: Package existence check passed for "libaio-devel(x86_64)"
Check: Package existence for "libgcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libgcc(x86_64)-4.8.5-39.el7 libgcc(x86_64)-3.4.6 passed
rac1 libgcc(x86_64)-4.8.5-39.el7 libgcc(x86_64)-3.4.6 passed
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libstdc++(x86_64)-4.8.5-39.el7 libstdc++(x86_64)-3.4.6 passed
rac1 libstdc++(x86_64)-4.8.5-39.el7 libstdc++(x86_64)-3.4.6 passed
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libstdc++-devel(x86_64)-4.8.5-39.el7 libstdc++-devel(x86_64)-3.4.6 passed
rac1 libstdc++-devel(x86_64)-4.8.5-39.el7 libstdc++-devel(x86_64)-3.4.6 passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 sysstat-10.1.5-19.el7 sysstat-5.0.5 passed
rac1 sysstat-10.1.5-19.el7 sysstat-5.0.5 passed
Result: Package existence check passed for "sysstat"
Check: Package existence for "pdksh"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 pdksh-5.2.14-37.el5_8.1 pdksh-5.2.14 passed
rac1 pdksh-5.2.14-37.el5_8.1 pdksh-5.2.14 passed
Result: Package existence check passed for "pdksh"
Check: Package existence for "expat(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 expat(x86_64)-2.1.0-11.el7 expat(x86_64)-1.95.7 passed
rac1 expat(x86_64)-2.1.0-11.el7 expat(x86_64)-1.95.7 passed
Result: Package existence check passed for "expat(x86_64)"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting check for consistency of primary group of root user
Node Name Status
------------------------------------ ------------------------
rac2 passed
rac1 passed
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Checking Core file name pattern consistency...
Core file name pattern consistency check passed.
Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed does not exist
rac1 passed does not exist
Result: User "grid" is not part of "root" group. Check passed
Check default user file creation mask
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 0022 0022 passed
rac1 0022 0022 passed
Result: Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "223.6.6.6" as found on node "rac2"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
rac2 passed
rac1 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes
Check: Time zone consistency
Result: Time zone consistency check passed
Pre-check for cluster services setup was successful.
Note: the result here must be "successful"; only then will errors during the installation be kept to a minimum.
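For reference, pre-install output like the above is typically produced with runcluvfy.sh from the unpacked grid media, run as the grid user. A hedged sketch; the media path /gridinstall/grid is an assumption based on the upload location described earlier, so adjust it to wherever you actually unpacked the files:

```shell
# Sketch: run the cluster pre-install verification and keep a log.
# GRID_MEDIA is an assumed path; override it if your unpack location differs.
GRID_MEDIA="${GRID_MEDIA:-/gridinstall/grid}"
if [ -x "$GRID_MEDIA/runcluvfy.sh" ]; then
    # -fixup generates a fixup script for failed checks; -verbose shows per-node detail
    (cd "$GRID_MEDIA" && ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose) | tee cluvfy_pre.log
else
    echo "runcluvfy.sh not found under $GRID_MEDIA; set GRID_MEDIA first" | tee cluvfy_pre.log
fi
# Only proceed to runInstaller once the log ends with "successful"
tail -n 1 cluvfy_pre.log
```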
4) Now we proceed with the installation.
[grid@rac1 grid]$ ./runInstaller
If you need additional languages, you can select and add them here.
Clicking Next here kicks off a check that takes a while, which is why we did the verification work up front.
Select Add to add node 2, filling in the names exactly as configured in the hosts file, then click Next. If SSH equivalence between node 1 and node 2 is not set up correctly, this step will fail. This check also takes quite a long time.
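If this step does fail, SSH equivalence can be verified from the command line before retrying. A sketch assuming the node names rac1/rac2 from the hosts file; run it as the grid user:

```shell
# Sketch: non-interactive SSH check in both directions. BatchMode=yes makes
# ssh fail instead of prompting for a password, which is exactly the
# behaviour the installer needs.
for h in rac1 rac2; do
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" date \
        && echo "ssh to $h OK" \
        || echo "ssh to $h FAILED (rerun sshUserSetup.sh)"
done | tee ssh_check.log
```

Repeat the same loop from the other node as well; equivalence must work from both sides.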
The check performed at this point covers environment variables, required packages, and so on, which is why we recommended running the pre-check beforehand.
As you can see, the check completes with no warnings or failures, so we can proceed straight to the installation.
At this stage of the installation, two scripts must be executed as the root user.
OK, let's run the scripts.
[root@rac1 oraInventory]# ./orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac1 grid]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0.4/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.4/grid/crs/install/crsconfig_params
Creating trace directory
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow: // here we can see the error message
2021-02-18 17:54:01.942:
[client(80812)]CRS-2101:The OLR was formatted using version 3.
/u01/app/11.2.0.4/grid/perl/bin/perl -I/u01/app/11.2.0.4/grid/perl/lib -I/u01/app/11.2.0.4/grid/crs/install /u01/app/11.2.0.4/grid/crs/install/rootcrs.pl execution failed
Here we can see the error.
How to resolve it:
The failure above occurs because CentOS 7/RHEL 7 start services with systemd, while this script uses the SysV init mechanism from CentOS 6/RHEL 6.
root.sh must be executed first, because the init.ohasd service is created during its run; only then can the steps below be performed. Both nodes must follow this sequence, and root.sh must not be run on both nodes simultaneously, or errors will occur.
The fix is as follows. As the root user, create the service file:
[root@rac1 /]# touch /usr/lib/systemd/system/ohas.service
[root@rac1 /]# chmod 644 /usr/lib/systemd/system/ohas.service
Add the following content to the file:
[root@rac1 /]# cat /usr/lib/systemd/system/ohas.service
[Unit]
Description=Oracle High Availability Services
After=syslog.target
[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Type=simple
Restart=always
[Install]
WantedBy=multi-user.target
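The steps above (create, chmod, edit) can be consolidated into one script. This is a sketch, not part of the original post: it writes the unit file to the current directory so it can be reviewed first; as root, copy it to /usr/lib/systemd/system/ on both nodes before running the systemctl commands that follow.

```shell
# Sketch: write the ohas.service unit file to the current directory.
# Note that systemd does not interpret shell redirections in ExecStart;
# the ">/dev/null 2>&1" tokens are passed to init.ohasd as arguments,
# which the script ignores (matching the captured logs in this guide).
cat > ohas.service <<'EOF'
[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Type=simple
Restart=always

[Install]
WantedBy=multi-user.target
EOF

# As root, on both nodes (not run here automatically):
#   install -m 644 ohas.service /usr/lib/systemd/system/ohas.service
echo "ohas.service written to $(pwd)"
```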
Start the service and enable it at boot:
[root@rac1 /]# systemctl daemon-reload
[root@rac1 /]# systemctl start ohas.service
[root@rac1 /]# systemctl enable ohas.service
[root@rac1 /]# systemctl status ohas.service // check the service status
● ohas.service - Oracle High Availability Services
Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-02 19:57:54 CST; 27s ago
Main PID: 26958 (init.ohasd)
CGroup: /system.slice/ohas.service
└─26958 /bin/sh /etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple
Apr 02 19:57:54 rac2 systemd[1]: Started Oracle High Availability Services.
Apr 02 19:57:54 rac2 systemd[1]: Starting Oracle High Availability Services...
Once the service above has been created, we need to run root.sh again:
[root@rac1 grid]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0.4/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.4/grid/crs/install/crsconfig_params
Installing Trace File Analyzer
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
ASM created and started successfully.
Disk Group asmdisk_ocr created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 6cc0ddd34afb4f55bfb0c30c28d199a4.
Successful addition of voting disk 588adcc5aca14f6bbf549b83e2f095e5.
Successful addition of voting disk 31800fc7ddce4ff8bfa7bee3bcf929d8.
Successfully replaced voting disk group with +asmdisk_ocr.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 6cc0ddd34afb4f55bfb0c30c28d199a4 (/dev/mapper/sdb_asmocr1) [ASMDISK_OCR]
2. ONLINE 588adcc5aca14f6bbf549b83e2f095e5 (/dev/mapper/sdc_asmocr2) [ASMDISK_OCR]
3. ONLINE 31800fc7ddce4ff8bfa7bee3bcf929d8 (/dev/mapper/sdd_asmocr3) [ASMDISK_OCR]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ASMDISK_OCR.dg' on 'rac1'
CRS-2676: Start of 'ora.ASMDISK_OCR.dg' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
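Before moving on to node 2, it can be worth confirming that the stack really is up on node 1. A hedged sketch; crsctl ships under the grid home used in this guide (/u01/app/11.2.0.4/grid), so adjust the path if yours differs:

```shell
# Sketch: post-root.sh sanity check on node 1, logged for later reference.
CRS_HOME="${CRS_HOME:-/u01/app/11.2.0.4/grid}"   # assumed grid home
{
    if [ -x "$CRS_HOME/bin/crsctl" ]; then
        "$CRS_HOME/bin/crsctl" check crs           # HAS, CRS, CSS, EVM should all be online
        "$CRS_HOME/bin/crsctl" query css votedisk  # should list the 3 voting disks
    else
        echo "crsctl not found under $CRS_HOME/bin (adjust CRS_HOME)"
    fi
} | tee crs_check.log
```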
Error when executing on node rac2, and explanation
[root@rac2 grid]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0.4/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.4/grid/crs/install/crsconfig_params
Installing Trace File Analyzer
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
The CRS-4402 message here appears because the first node is already up: the second node is joining the existing cluster and can no longer start in exclusive mode, so it restarts and joins the cluster.
At this point the script execution is complete; next we continue with the remaining installation steps.
The cause of the error here: because /etc/hosts was configured to resolve the SCAN name, resolution did not go through DNS, which triggers this error. It can be ignored, or you can remove the SCAN entries from /etc/hosts and then verify with nslookup that DNS resolves the SCAN correctly.
If you want to be sure, ping the SCAN IP from both nodes; if it responds, there is no problem.
We simply ignore this issue and click Next.
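The resolution and ping test above can be scripted. A sketch; the SCAN name rac-scan below is an assumption, so substitute the name actually configured in your /etc/hosts or DNS:

```shell
# Sketch: check whether the SCAN name resolves, then ping it. Run on both nodes.
SCAN_NAME="${SCAN_NAME:-rac-scan}"   # assumed SCAN name; replace with yours
{
    if getent hosts "$SCAN_NAME"; then
        ping -c 2 "$SCAN_NAME"
    else
        echo "$SCAN_NAME does not resolve; check /etc/hosts or DNS"
    fi
} | tee scan_check.log
```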