
Oracle 10g RAC cluster: problems encountered (raw devices)


http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnCentos4UsingVMware.php
http://www.comp.dit.ie/btierney/Oracle11gDoc/install.111/b28263/crsunix.htm#insertedID1
Caught Cluster Exception PRKC-1044 : Failed to check remote command execution setup for node localHost using shells /usr/bin/ssh and /usr/bin/rsh
localhost: Connection refused

Solution: this is usually caused by SSH user equivalence not being configured correctly.

On each node:

1. $ cd $HOME
2. $ mkdir ~/.ssh
3. $ chmod 700 ~/.ssh
4. $ /usr/bin/ssh-keygen -t rsa
5. $ /usr/bin/ssh-keygen -t dsa


On Node 1:

1. $ cd $HOME/.ssh
2. $ cat id_rsa.pub >> authorized_keys
3. $ cat id_dsa.pub >> authorized_keys
4. Copy the authorized_keys file to the other node:
5. $ scp authorized_keys vrh4:/home/oracle/.ssh


On Node 2:

1. $ cd $HOME/.ssh
2. $ cat id_rsa.pub >> authorized_keys
3. $ cat id_dsa.pub >> authorized_keys
4. $ scp authorized_keys vrh3:/home/oracle/.ssh

Then I ran the following commands:

On Node 1:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add

On Node 2:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
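The equivalence setup above can be sanity-checked before launching the installer. A minimal sketch, assuming the node names rac1/rac2 and their -priv counterparts used later in these notes; substitute your own:

```shell
#!/bin/sh
# Sketch: verify passwordless SSH to every cluster interface.
# BatchMode makes ssh fail fast instead of prompting for a password,
# which is exactly the condition the installer requires.
NODES="rac1 rac2 rac1-priv rac2-priv"

check_nodes() {
  # $1 optionally overrides the remote-execution command
  cmd=${1:-"ssh -o BatchMode=yes"}
  for n in $NODES; do
    $cmd "$n" date || echo "FAILED: $n"
  done
}

# Run against the live cluster:
# check_nodes
```

Any "FAILED" line means that interface still prompts, and the installer's remote-command check will fail the same way.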

Reference: http://forums.oracle.com/forums/thread.jspa?threadID=589398&tstart=-1

If the installer complains that the local node / localhost information is missing:
enter the public node name, the private node name, and the virtual hostname for every node in the cluster, including the local node.
http://www.itk.ilstu.edu/docs/Oracle/rac.101/b10765.pdf


mount.ocfs2: Invalid argument while mounting

1. Upgrade the kernel and OCFS2 to the latest version (2.6.9-34). Packages used:

   mkinitrd-4.2.1.6-1.i386.rpm
   kernel-smp-2.6.9-34.EL.i686.rpm
   ocfs2-2.6.9-34.ELsmp-1.2.1-1.i686.rpm
   ocfs2-tools-1.2.1-1.i386.rpm
   ocfs2console-1.2.1-1.i386.rpm
   ocfs2-tools-debuginfo-1.2.1-1.i386.rpm

2. In the .vmx file of both nodes, in addition to disk.locking = "false", also add the following:

   diskLib.dataCacheMaxSize = "0"
   diskLib.dataCacheMaxReadAheadSize = "0"
   diskLib.DataCacheMinReadAheadSize = "0"
   diskLib.dataCachePageSize = "4096"
   diskLib.maxUnsyncedWrites = "0"

Disabling SELinux: 100% success.
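A sketch of disabling SELinux both immediately and persistently; the config path defaults to the standard /etc/selinux/config and is parameterized only so it can be pointed elsewhere:

```shell
#!/bin/sh
# Sketch: turn SELinux off now and across reboots.
CONF=${CONF:-/etc/selinux/config}

disable_selinux() {
  # immediate effect (needs root); ignore errors if setenforce is absent
  setenforce 0 2>/dev/null || true
  # persist across reboots by switching the config to disabled
  sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$CONF"
}

# disable_selinux   # run as root on each node
```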


3. When installing Oracle Clusterware, the message "the specified nodes are not clusterable" appears after adding a node.

Solution: in the same session used for the installation, first run ssh rac1 date, ssh rac2 date, ssh rac1-priv date, and ssh rac2-priv date. The point is that, in this session, SSH authentication must prompt for neither a password nor a "yes" host-key confirmation.


4. When mounting diskgroups during Oracle Database installation:

could not mount the diskgroup on remote node rac2 using connection
service rac2:1521:+ASM2. Ensure that the listener is running on this
node and the ASM instance is registered to the listener. Received the
following error:
ORA-15110: no diskgroups mounted

Solution: mount the diskgroups one at a time (mount DG1, then mount the next), rather than "mount all", which produces the error above.
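Mounting one group at a time can be scripted against the ASM instance. A sketch, where DG1 and +ASM2 come from the error above and the sqlplus invocation is the standard one:

```shell
#!/bin/sh
# Sketch: mount ASM diskgroups individually instead of "mount all".
mount_diskgroup_sql() {
  # emit the SQL that mounts a single named diskgroup
  printf 'ALTER DISKGROUP %s MOUNT;\n' "$1"
}

# On the node that failed (here node 2), against its ASM instance:
# export ORACLE_SID=+ASM2
# mount_diskgroup_sql DG1 | sqlplus -s / as sysdba
```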


5. Expecting the CRS daemons to be up within 600 seconds.
Giving up: Oracle CSS stack appears NOT to be running.
Oracle CSS service would not start as installed.
Automatic Storage Management (ASM) cannot be used until the Oracle CSS service is started.
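When CSS refuses to start, a first step is confirming whether the daemon is in the process table at all. A minimal sketch; the CRS home path is an assumption, adjust it for your install:

```shell
#!/bin/sh
# Sketch: check whether the CSS daemon (ocssd) is running.
CRS_HOME=${CRS_HOME:-/u01/app/oracle/product/10.2.0/crs}

css_running() {
  # $1 lets a caller pass canned process output; default is the live table
  ps_out=${1:-$(ps -e -o comm=)}
  echo "$ps_out" | grep -q ocssd
}

# Further diagnostics on a live node (run as root):
# "$CRS_HOME/bin/crsctl" check crs
# tail -50 "$CRS_HOME/log/$(hostname)/cssd/ocssd.log"
```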
