Upgrading Oracle RAC from 10.2.0.5 to 11.2.0.4 on Red Hat 5.7

1. Upgrading Grid Infrastructure

Current RAC status:
[root@rac1 ~]# crs_stat -t
Name            Type         Target    State     Host
------------------------------------------------------------
ora.rac.db      application  ONLINE    ONLINE    rac1
ora....c1.inst  application  ONLINE    ONLINE    rac1
ora....c2.inst  application  ONLINE    ONLINE    rac2
ora....SM1.asm  application  ONLINE    ONLINE    rac1
ora....C1.lsnr  application  ONLINE    ONLINE    rac1
ora.rac1.gsd    application  ONLINE    ONLINE    rac1
ora.rac1.ons    application  ONLINE    ONLINE    rac1
ora.rac1.vip    application  ONLINE    ONLINE    rac1
ora....SM2.asm  application  ONLINE    ONLINE    rac2
ora....C2.lsnr  application  ONLINE    ONLINE    rac2
ora.rac2.gsd    application  ONLINE    ONLINE    rac2
ora.rac2.ons    application  ONLINE    ONLINE    rac2
ora.rac2.vip    application  ONLINE    ONLINE    rac2

CRS version:
[root@rac1 ~]# crsctl query crs softwareversion
CRS software version on node [rac1] is [10.2.0.5.0]

Database version:
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.5.0 - Production on Tue Nov 22 19:07:55 2016
Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE    10.2.0.5.0      Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production

OCR check:
[oracle@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1043916
         Used space (kbytes)      :       3848
         Available space (kbytes) :    1040068
         ID                       : 1371096888
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File not configured
         Cluster registry integrity check succeeded

Voting disk check:
[oracle@rac1 ~]$ crsctl query css votedisk
 0.     0    /dev/raw/raw2
located 1 votedisk(s).

Check that the required OS packages are installed.

Check /etc/security/limits.conf.

Check /etc/sysctl.conf.
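A quick way to eyeball a sysctl file against the 11.2 minimums is a loop like the one below. This is only a sketch: `check_sysctl` and the sample file are made up for illustration, and the required values shown are taken from the cluvfy output later in this post; on a real node you would point it at /etc/sysctl.conf.

```shell
# Sketch: compare kernel parameters in a sysctl-style file against required
# minimums. Runs on a sample copy here so it is self-contained.
cat > /tmp/sysctl.sample <<'EOF'
kernel.shmmni = 4096
fs.file-max = 6815744
net.core.rmem_max = 4194304
EOF

check_sysctl() {  # check_sysctl <file> <param> <required-minimum>
  current=$(awk -F'[ =]+' -v p="$2" '$1==p {print $2}' "$1")
  if [ -n "$current" ] && [ "$current" -ge "$3" ]; then
    echo "$2 OK ($current)"
  else
    echo "$2 FAIL (${current:-missing}, need >= $3)"
  fi
}

check_sysctl /tmp/sysctl.sample kernel.shmmni 4096
check_sysctl /tmp/sysctl.sample fs.file-max 6815744
check_sysctl /tmp/sysctl.sample net.core.rmem_max 4194304
```

cluvfy performs the same comparison anyway, but running a check like this before the installer saves a round-trip through the GUI.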

Add the SCAN IP entry to /etc/hosts:
[root@rac1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.56.110  rac1
192.168.56.111  rac2
192.168.56.112  rac1-vip
192.168.56.113  rac2-vip
172.16.8.1      rac1-priv
172.16.8.2      rac2-priv
192.168.56.115  rac-scan
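Duplicate entries in the hosts file are a classic source of odd VIP/SCAN behaviour, so it is worth a quick sanity check. The snippet below is a sketch that runs against a sample copy so it is self-contained; on the nodes you would point `HOSTS_FILE` at /etc/hosts.

```shell
# Sketch: flag duplicate IPs or hostnames in a hosts file.
HOSTS_FILE=/tmp/hosts.sample
cat > "$HOSTS_FILE" <<'EOF'
192.168.56.110 rac1
192.168.56.111 rac2
192.168.56.112 rac1-vip
192.168.56.113 rac2-vip
172.16.8.1 rac1-priv
172.16.8.2 rac2-priv
192.168.56.115 rac-scan
EOF

# Column 1 is the IP; every later column is a hostname or alias.
dup_ips=$(awk '!/^#/ && NF {print $1}' "$HOSTS_FILE" | sort | uniq -d)
dup_names=$(awk '!/^#/ && NF {for (i = 2; i <= NF; i++) print $i}' "$HOSTS_FILE" | sort | uniq -d)

[ -z "$dup_ips" ]   && echo "no duplicate IPs"       || echo "duplicate IPs: $dup_ips"
[ -z "$dup_names" ] && echo "no duplicate hostnames" || echo "duplicate hostnames: $dup_names"
```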

Check the oracle user:
[oracle@rac1 ~]$ id oracle
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba)
Since this is an upgrade to 11.2.0.4, grid is installed with the same oracle user, but the extra OS groups it needs must be created manually.

On all nodes, create the asmadmin, asmdba, asmoper and oper groups and add them to the oracle user:

groupadd -g 1020 asmadmin
groupadd -g 1021 asmdba
groupadd -g 1022 asmoper
groupadd -g 1032 oper
usermod -g oinstall -G dba,oper,asmadmin,asmdba,asmoper oracle
Verify the oracle user's groups after the change:

[root@rac1 ~]# id oracle
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba),1020(asmadmin),1021(asmdba),1022(asmoper),1032(oper)

Create the 11g installation directories on all nodes:
mkdir -p /u01/11.2.0/grid
chown -R oracle:oinstall /u01/11.2.0/grid
chmod -R 775 /u01/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chmod -R 775 /u01/app/oracle/product/11.2.0/db_1

Add the oracle and grid environment variables on all nodes:

vi ~/.bash_profile

Add the following aliases (single quotes, so that $PATH and $CRS_HOME expand when the alias runs rather than when .bash_profile is read):
alias ora='export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export PATH=/u01/app/oracle/product/11.2.0/db_1/bin:$CRS_HOME/bin:$PATH:$HOME/bin
export ORACLE_SID=rac1'
alias grid='export ORACLE_HOME=/u01/11.2.0/grid
export PATH=/u01/11.2.0/grid/bin:$CRS_HOME/bin:$PATH:$HOME/bin
export ORACLE_SID=+ASM1'
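One caveat with this technique: aliases are not expanded in non-interactive shells unless `expand_aliases` is set, so scripts that source the profile will not see them. Shell functions behave the same way interactively but also work in scripts; the sketch below shows the equivalent function form (same paths as above, demonstrated by switching to the grid environment):

```shell
# Equivalent environment switchers as shell functions rather than aliases.
ora() {
  export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
  export PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin
  export ORACLE_SID=rac1
}
grid() {
  export ORACLE_HOME=/u01/11.2.0/grid
  export PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin
  export ORACLE_SID=+ASM1
}

# Switch to the grid environment and confirm:
grid
echo "ORACLE_HOME=$ORACLE_HOME ORACLE_SID=$ORACLE_SID"
```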

Install the cvuqdisk package, which cluvfy uses to check the Oracle environment:
[root@rac1 ~]# cd /home/oracle/grid/rpm/
[root@rac1 rpm]# ls
cvuqdisk-1.0.9-1.rpm
[root@rac1 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
[oracle@rac1 grid]$ pwd
/home/oracle/grid
Run runcluvfy.sh to check the upgrade environment:

./runcluvfy.sh stage -pre crsinst -upgrade -n rac1,rac2 -rolling -src_crshome /u01/app/oracle/product/10.2.0/crs -dest_crshome /u01/11.2.0/grid -dest_version 11.2.0.4.0 -fixup -fixupdir /home/oracle/fixupscript -verbose
         
         
[oracle@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -n rac1,rac2 -rolling -src_crshome /u01/app/oracle/product/10.2.0/crs -dest_crshome /u01/11.2.0/grid -dest_version 11.2.0.4.0 -fixup -fixupdir /home/oracle/fixupscript -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "rac1"
  Destination Node    Reachable?
  ------------------  ----------
  rac2                yes
  rac1                yes
Result: Node reachability check passed from node "rac1"
Checking user equivalence...
Check: User equivalence for user "oracle"
  Node Name           Status
  ------------------  ----------
  rac2                passed
  rac1                passed
Result: User equivalence check passed for user "oracle"
Checking CRS user consistency
Result: CRS user consistency check successful
Checking node connectivity...
Checking hosts config file...
  rac2                passed
  rac1                passed
Verification of the hosts config file successful
Interface information for node "rac2"
  Name  IP Address      Subnet        Gateway  Def. Gateway  HW Address         MTU
  ----  --------------  ------------  -------  ------------  -----------------  ----
  eth0  192.168.56.110  192.168.56.0  0.0.0.0  192.168.56.1  08:00:27:C7:64:12  1500
  eth1  172.16.8.1      172.16.8.0    0.0.0.0  192.168.56.1  08:00:27:BE:FE:15  1500
Interface information for node "rac1"
  eth0  192.168.56.110  192.168.56.0  0.0.0.0  192.168.56.1  08:00:27:C7:64:12  1500
  eth1  172.16.8.1      172.16.8.0    0.0.0.0  192.168.56.1  08:00:27:BE:FE:15  1500
Check: Node connectivity for interface "eth0"
  Source                Destination           Connected?
  --------------------  --------------------  ----------
  rac2[192.168.56.110]  rac1[192.168.56.110]  yes
Result: Node connectivity passed for interface "eth0"
Check: TCP connectivity of subnet "192.168.56.0"
  rac1:192.168.56.110   rac2:192.168.56.110   passed
Result: TCP connectivity check passed for subnet "192.168.56.0"
Check: Node connectivity for interface "eth1"
  rac2[172.16.8.1]      rac1[172.16.8.1]      yes
Result: Node connectivity passed for interface "eth1"
Check: TCP connectivity of subnet "172.16.8.0"
  rac1:172.16.8.1       rac2:172.16.8.1       passed
Result: TCP connectivity check passed for subnet "172.16.8.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "172.16.8.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "172.16.8.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "172.16.8.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking OCR integrity...
Check for compatible storage device for OCR location "/dev/raw/raw1"...
Checking OCR device "/dev/raw/raw1" for sharedness...
OCR device "/dev/raw/raw1" is shared...
Checking size of the OCR location "/dev/raw/raw1"...
Size check for OCR location "/dev/raw/raw1" successful...
OCR integrity check passed
Checking ASMLib configuration.
  rac2                passed
  rac1                passed
Result: Check for ASMLib configuration passed.
Check: Total memory
  Node Name  Available                Required               Status
  ---------  -----------------------  ---------------------  ------
  rac2       1.4264GB (1495664.0KB)   1.5GB (1572864.0KB)    failed
  rac1       1.4264GB (1495664.0KB)   1.5GB (1572864.0KB)    failed
Result: Total memory check failed
Check: Available memory
  rac2       785.3516MB (804200.0KB)  50MB (51200.0KB)       passed
  rac1       785.3516MB (804200.0KB)  50MB (51200.0KB)       passed
Result: Available memory check passed
Check: Swap space
  rac2       2.875GB (3014648.0KB)    1.5GB (1572864.0KB)    passed
  rac1       2.875GB (3014648.0KB)    1.5GB (1572864.0KB)    passed
Result: Swap space check passed
Check: Free disk space for "rac2:/u01/11.2.0/grid,rac2:/tmp"
  Path              Node Name  Mount point  Available  Required  Status
  ----------------  ---------  -----------  ---------  --------  ------
  /u01/11.2.0/grid  rac2       UNKNOWN      NOT AVAIL  7.5GB     failed
  /tmp              rac2       UNKNOWN      NOT AVAIL  7.5GB     failed
Result: Free disk space check failed for "rac2:/u01/11.2.0/grid,rac2:/tmp"
Check: Free disk space for "rac1:/u01/11.2.0/grid,rac1:/tmp"
  /u01/11.2.0/grid  rac1       /            10.1629GB  7.5GB     passed
  /tmp              rac1       /            10.1629GB  7.5GB     passed
Result: Free disk space check passed for "rac1:/u01/11.2.0/grid,rac1:/tmp"
Check: User existence for "oracle"
  Node Name  Status  Comment
  ---------  ------  -----------
  rac2       passed  exists(500)
  rac1       passed  exists(500)
Checking for multiple users with UID value 500
Result: Check for multiple users with UID value 500 passed
Result: User existence check passed for "oracle"
Check: Group existence for "oinstall"
  rac2       passed  exists
  rac1       passed  exists
Result: Group existence check passed for "oinstall"
Check: Membership of user "oracle" in group "oinstall" [as Primary]
  Node Name  User Exists  Group Exists  User in Group  Primary  Status
  ---------  -----------  ------------  -------------  -------  ------
  rac2       yes          yes           yes            yes      passed
  rac1       yes          yes           yes            yes      passed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed
Check: Run level
  Node Name  run level  Required  Status
  ---------  ---------  --------  ------
  rac2       3          3,5       passed
  rac1       3          3,5       passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
  Node Name  Type  Available  Required  Status
  ---------  ----  ---------  --------  ------
  rac2       hard  65536      65536     passed
  rac1       hard  65536      65536     passed
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
  rac2       soft  1024       1024      passed
  rac1       soft  1024       1024      passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
  rac2       hard  16384      16384     passed
  rac1       hard  16384      16384     passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
  rac2       soft  16384      2047      passed
  rac1       soft  16384      2047      passed
Result: Soft limits check passed for "maximum user processes"
Checking for Oracle patch "14617909" in home "/u01/app/oracle/product/10.2.0/crs".
  Node Name  Applied  Required  Comment
  ---------  -------  --------  -------
  rac2       missing  14617909  failed
  rac1       missing  14617909  failed
Result: Check for Oracle patch "14617909" in home "/u01/app/oracle/product/10.2.0/crs" failed
There are no oracle patches required for home "/u01/11.2.0/grid".
Check: System architecture
  Node Name  Available  Required  Status
  ---------  ---------  --------  ------
  rac2       x86_64     x86_64    passed
  rac1       x86_64     x86_64    passed
Result: System architecture check passed
Check: Kernel version
  rac2       2.6.32-200.13.1.el5uek  2.6.18  passed
  rac1       2.6.32-200.13.1.el5uek  2.6.18  passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
  Node Name  Current  Configured  Required  Status
  ---------  -------  ----------  --------  ------
  rac2       250      250         250       passed
  rac1       250      250         250       passed
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
  rac2       32000    32000       32000     passed
  rac1       32000    32000       32000     passed
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
  rac2       100      100         100       passed
  rac1       100      100         100       passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
  rac2       128      128         128       passed
  rac1       128      128         128       passed
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
  rac2       4398046511104  4398046511104  765779968  passed
  rac1       4398046511104  4398046511104  765779968  passed
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni"
  rac2       4096     4096        4096      passed
  rac1       4096     4096        4096      passed
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
  rac2       4294967296  4294967296  2097152  passed
  rac1       4294967296  4294967296  2097152  passed
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
  rac2       6815744  6815744     6815744   passed
  rac1       6815744  6815744     6815744   passed
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
  rac2       between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
  rac1       between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
  rac2       1048576  1048576     262144    passed
  rac1       1048576  1048576     262144    passed
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
  rac2       4194304  4194304     4194304   passed
  rac1       4194304  4194304     4194304   passed
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
  rac2       262144   262144      262144    passed
  rac1       262144   262144      262144    passed
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
  rac2       1048576  1048576     1048576   passed
  rac1       1048576  1048576     1048576   passed
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
  rac2       1048576  1048576     1048576   passed
  rac1       1048576  1048576     1048576   passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "make"
  Node Name  Available        Required   Status
  ---------  ---------------  ---------  ------
  rac2       make-3.81-3.el5  make-3.81  passed
  rac1       make-3.81-3.el5  make-3.81  passed
Result: Package existence check passed for "make"
Check: Package existence for "binutils"
  rac2  binutils-2.17.50.0.6-14.el5  binutils-2.17.50.0.6  passed
  rac1  binutils-2.17.50.0.6-14.el5  binutils-2.17.50.0.6  passed
Result: Package existence check passed for "binutils"
Check: Package existence for "gcc(x86_64)"
  rac2  gcc(x86_64)-4.1.2-51.el5  gcc(x86_64)-4.1.2  passed
  rac1  gcc(x86_64)-4.1.2-51.el5  gcc(x86_64)-4.1.2  passed
Result: Package existence check passed for "gcc(x86_64)"
Check: Package existence for "libaio(x86_64)"
  rac2  libaio(x86_64)-0.3.106-5  libaio(x86_64)-0.3.106  passed
  rac1  libaio(x86_64)-0.3.106-5  libaio(x86_64)-0.3.106  passed
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "glibc(x86_64)"
  rac2  glibc(x86_64)-2.5-65  glibc(x86_64)-2.5-24  passed
  rac1  glibc(x86_64)-2.5-65  glibc(x86_64)-2.5-24  passed
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "compat-libstdc++-33(x86_64)"
  rac2  compat-libstdc++-33(x86_64)-3.2.3-61  compat-libstdc++-33(x86_64)-3.2.3  passed
  rac1  compat-libstdc++-33(x86_64)-3.2.3-61  compat-libstdc++-33(x86_64)-3.2.3  passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "elfutils-libelf(x86_64)"
  rac2  elfutils-libelf(x86_64)-0.137-3.el5  elfutils-libelf(x86_64)-0.125  passed
  rac1  elfutils-libelf(x86_64)-0.137-3.el5  elfutils-libelf(x86_64)-0.125  passed
Result: Package existence check passed for "elfutils-libelf(x86_64)"
Check: Package existence for "elfutils-libelf-devel"
  rac2  elfutils-libelf-devel-0.137-3.el5  elfutils-libelf-devel-0.125  passed
  rac1  elfutils-libelf-devel-0.137-3.el5  elfutils-libelf-devel-0.125  passed
Result: Package existence check passed for "elfutils-libelf-devel"
Check: Package existence for "glibc-common"
  rac2  glibc-common-2.5-65  glibc-common-2.5  passed
  rac1  glibc-common-2.5-65  glibc-common-2.5  passed
Result: Package existence check passed for "glibc-common"
Check: Package existence for "glibc-devel(x86_64)"
  rac2  glibc-devel(x86_64)-2.5-65  glibc-devel(x86_64)-2.5  passed
  rac1  glibc-devel(x86_64)-2.5-65  glibc-devel(x86_64)-2.5  passed
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "glibc-headers"
  rac2  glibc-headers-2.5-65  glibc-headers-2.5  passed
  rac1  glibc-headers-2.5-65  glibc-headers-2.5  passed
Result: Package existence check passed for "glibc-headers"
Check: Package existence for "gcc-c++(x86_64)"
  rac2  gcc-c++(x86_64)-4.1.2-51.el5  gcc-c++(x86_64)-4.1.2  passed
  rac1  gcc-c++(x86_64)-4.1.2-51.el5  gcc-c++(x86_64)-4.1.2  passed
Result: Package existence check passed for "gcc-c++(x86_64)"
Check: Package existence for "libaio-devel(x86_64)"
  rac2  libaio-devel(x86_64)-0.3.106-5  libaio-devel(x86_64)-0.3.106  passed
  rac1  libaio-devel(x86_64)-0.3.106-5  libaio-devel(x86_64)-0.3.106  passed
Result: Package existence check passed for "libaio-devel(x86_64)"
Check: Package existence for "libgcc(x86_64)"
  rac2  libgcc(x86_64)-4.1.2-51.el5  libgcc(x86_64)-4.1.2  passed
  rac1  libgcc(x86_64)-4.1.2-51.el5  libgcc(x86_64)-4.1.2  passed
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)"
  rac2  libstdc++(x86_64)-4.1.2-51.el5  libstdc++(x86_64)-4.1.2  passed
  rac1  libstdc++(x86_64)-4.1.2-51.el5  libstdc++(x86_64)-4.1.2  passed
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)"
  rac2  libstdc++-devel(x86_64)-4.1.2-51.el5  libstdc++-devel(x86_64)-4.1.2  passed
  rac1  libstdc++-devel(x86_64)-4.1.2-51.el5  libstdc++-devel(x86_64)-4.1.2  passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat"
  rac2  sysstat-7.0.2-11.el5  sysstat-7.0.2  passed
  rac1  sysstat-7.0.2-11.el5  sysstat-7.0.2  passed
Result: Package existence check passed for "sysstat"
Check: Package existence for "ksh"
  rac2  ksh-20100202-1.el5_6.6  ksh-20060214  passed
  rac1  ksh-20100202-1.el5_6.6  ksh-20060214  passed
Result: Package existence check passed for "ksh"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting check for consistency of primary group of root user
  rac2  passed
  rac1  passed
Check for consistency of root user's primary group passed
Check: Package existence for "cvuqdisk"
  rac2  cvuqdisk-1.0.9-1  cvuqdisk-1.0.9-1  passed
  rac1  cvuqdisk-1.0.9-1  cvuqdisk-1.0.9-1  passed
Result: Package existence check passed for "cvuqdisk"
Starting Clock synchronization checks using Network Time Protocol (NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
No NTP Daemons or Services were found to be running
PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s):
rac2,rac1
Result: Clock synchronization check using Network Time Protocol (NTP) failed
Checking Core file name pattern consistency...
Core file name pattern consistency check passed.
Checking to make sure user "oracle" is not in "root" group
  Node Name  Status  Comment
  ---------  ------  --------------
  rac2       passed  does not exist
  rac1       passed  does not exist
Result: User "oracle" is not part of "root" group. Check passed
Check default user file creation mask
  Node Name  Available  Required  Comment
  ---------  ---------  --------  -------
  rac2       0022       0022      passed
  rac1       0022       0022      passed
Result: Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking DNS response time for an unreachable node
  rac2       failed
  rac1       failed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac2,rac1
File "/etc/resolv.conf" is not consistent across nodes
UDev attributes check for OCR locations started...
Checking udev settings for device "/dev/raw/raw1"
  Device  Owner   Group     Permissions  Result
  ------  ------  --------  -----------  ------
  raw1    root    oinstall  0640         passed
  raw1    root    oinstall  0640         passed
Result: UDev attributes check passed for OCR locations
UDev attributes check for Voting Disk locations started...
Checking udev settings for device "/dev/raw/raw2"
  raw2    oracle  oinstall  0640         passed
  raw2    oracle  oinstall  0640         passed
Result: UDev attributes check passed for Voting Disk locations
Check: Time zone consistency
Result: Time zone consistency check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Checking Oracle Cluster Voting Disk configuration...
ERROR:
PRVF-5449 : Check of Voting Disk location "/dev/raw/raw2(/dev/raw/raw2)" failed on the following nodes:
        rac2
        rac2: GetFileInfo command failed.
PRVF-5431 : Oracle Cluster Voting Disk configuration check failed
Clusterware version consistency passed
Pre-check for cluster services setup was unsuccessful on all the nodes.
The checks surfaced the following main problems:

1. The libaio-devel packages (x86_64 and i686) and the sysstat package are missing.

2. Patch 14617909 is missing. Per the My Oracle Support note, the upgrade can still succeed without it, but one node may reboot during the process:

The issue will cause a node reboot, however, rootupgrade.sh will succeed and upgrade will finish successfully.

To avoid the node eviction, patch 14617909 should be applied to 10.2.0.5 CRS home prior to running rootupgrade.sh

3. The OCR and voting disk device permissions are wrong; they should be root:oinstall 640 and oracle:oinstall 640 respectively.
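For issue 3, a small helper makes it easy to verify the raw-device ownership on every node before rerunning cluvfy. This is a sketch: `check_perm` is a made-up name, and the demo runs on a scratch file so it is self-contained; on the cluster you would call it against the devices from the ocrcheck and votedisk output above.

```shell
# Sketch: verify owner, group and mode of a file (Linux stat).
# On the nodes, call it as:
#   check_perm /dev/raw/raw1 root:oinstall 640     # OCR
#   check_perm /dev/raw/raw2 oracle:oinstall 640   # voting disk
check_perm() {  # check_perm <path> <owner:group> <mode>
  actual=$(stat -c '%U:%G %a' "$1") || return 1
  if [ "$actual" = "$2 $3" ]; then
    echo "$1 OK"
  else
    echo "$1 MISMATCH: $actual (want $2 $3)"
  fi
}

# Self-contained demo on a scratch file owned by the current user:
f=$(mktemp)
chmod 640 "$f"
check_perm "$f" "$(id -un):$(id -gn)" 640
```

If a device reports MISMATCH, fix it with chown/chmod as root on that node (and in the udev rules, so the setting survives a reboot).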

Install as the oracle user; switch to the grid environment variables:
[oracle@rac1 grid]$ grid
[oracle@rac1 grid]$ env | grep ORACLE
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app
ORACLE_HOME=/u01/11.2.0/grid
Run runInstaller to start the installation.

Here, select the second option: Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management.

 
 
Click Yes to continue.

Click Install to begin the installation.

Run rootupgrade.sh on node 1 and then on node 2. Node 1 output:
[root@rac1 ~]# /u01/11.2.0/grid/rootupgrade.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Output on node 2:
    
    
  [root@rac2 ~]# /u01/11.2.0/grid/rootupgrade.sh
  Performing root user operation for Oracle 11g
  The following environment variables are set as:
      ORACLE_OWNER = oracle
      ORACLE_HOME  = /u01/11.2.0/grid
  Enter the full pathname of the local bin directory: [/usr/local/bin]:
  The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
  [n]: y
  Copying dbhome to /usr/local/bin ...
  The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
  [n]: y
  Copying oraenv to /usr/local/bin ...
  The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
  [n]: y
  Copying coraenv to /usr/local/bin ...
  Entries will be added to the /etc/oratab file as needed by
  Database Configuration Assistant when a database is created
  Finished running generic part of root script.
  Now product-specific root actions will be performed.
  Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params
  Creating trace directory
  User ignored Prerequisites during installation
  Installing Trace File Analyzer
  OLR initialization - successful
  Replacing Clusterware entries in inittab
  clscfg: EXISTING configuration version 5 detected.
  clscfg: version 5 is 11g Release 2.
  Successfully accumulated necessary OCR keys.
  Creating OCR keys for user 'root', privgrp 'root'..
  Operation successful.
  Start upgrade invoked..
  Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
  Started to upgrade the OCR.
  Started to upgrade the CSS.
  Started to upgrade the CRS.
  The CRS was successfully upgraded.
  Successfully upgraded the Oracle Clusterware.
  Oracle Clusterware operating version was successfully set to 11.2.0.4.0
  Configure Oracle Grid Infrastructure for a Cluster ... succeeded
There is an NTP check failure in the post-upgrade verification; it can safely be ignored:
    
    
  INFO: Liveness check failed for "ntpd"
  INFO: Check failed on nodes:
  INFO:   rac2,rac1
  INFO: PRVF-5494 : The NTP Daemon or Service was not alive on all nodes
  INFO: PRVF-5415 : Check to see if NTP daemon or service is running failed
  INFO: Clock synchronization check using Network Time Protocol(NTP) failed
  INFO: PRVF-9652 : Cluster Time Synchronization Services check failed
  INFO: Checking VIP configuration.
  INFO: Checking VIP Subnet configuration.
  INFO: Check for VIP Subnet configuration passed.
  INFO: Checking VIP reachability
  INFO: Check for VIP reachability passed.
  INFO: Post-check for cluster services setup was unsuccessful on all the nodes.

 

 

 
At this point, checking with the 11g environment variables shows that the new Grid home has taken over the ASM and database resources, but the OCR and voting disk are still on raw devices, the 10g way:
   
   
  [oracle@rac1 grid]$ ocrcheck
  Status of Oracle Cluster Registry is as follows :
           Version                  :          3
           Total space (kbytes)     :    1039908
           Used space (kbytes)      :       6228
           Available space (kbytes) :    1033680
           ID                       : 1371096888
           Device/File Name         : /dev/raw/raw1
                                      Device/File integrity check succeeded
                                      Device/File not configured
                                      Device/File not configured
                                      Device/File not configured
                                      Device/File not configured
           Cluster registry integrity check succeeded
           Logical corruption check bypassed due to non-privileged user
  [oracle@rac1 grid]$ env | grep PATH
  PATH=/u01/11.2.0/grid/bin:/u01/app/oracle/product/10.2.0/crs/bin:/u01/app/oracle/product/10.2.0/db_1/bin:/u01/app/oracle/product/10.2.0/crs/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin:/home/oracle/bin
  [oracle@rac1 grid]$ crsctl query css votedisk
  ##  STATE    File Universal Id                File Name       Disk group
  --  -----    -----------------                ---------       ----------
   1. ONLINE   32d0508888cbcf0abf885eeeeeec5ae7 (/dev/raw/raw2) []
  Located 1 voting disk(s).
  [oracle@rac1 grid]$ crsctl query crs softwareversion
  Oracle Clusterware version on node [rac1] is [11.2.0.4.0]
  [oracle@rac1 grid]$ crsctl stat res -t
  --------------------------------------------------------------------------------
  NAME           TARGET  STATE        SERVER       STATE_DETAILS
  --------------------------------------------------------------------------------
  Local Resources
  --------------------------------------------------------------------------------
  ora.DATA.dg
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  ora.LISTENER.lsnr
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  ora.asm
                 ONLINE  ONLINE       rac1         Started
                 ONLINE  ONLINE       rac2         Started
  ora.gsd
                 OFFLINE OFFLINE      rac1
                 OFFLINE OFFLINE      rac2
  ora.net1.network
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  ora.ons
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  ora.registry.acfs
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  --------------------------------------------------------------------------------
  Cluster Resources
  --------------------------------------------------------------------------------
  ora.LISTENER_SCAN1.lsnr
        1        ONLINE  ONLINE       rac1
  ora.cvu
        1        ONLINE  ONLINE       rac2
  ora.oc4j
        1        ONLINE  ONLINE       rac2
  ora.rac.db
        1        ONLINE  ONLINE       rac2
  ora.rac.rac1.inst
        1        ONLINE  ONLINE       rac1
  ora.rac.rac2.inst
        1        ONLINE  ONLINE       rac2
  ora.rac1.vip
        1        ONLINE  ONLINE       rac1
  ora.rac2.vip
        1        ONLINE  ONLINE       rac2
  ora.scan1.vip
        1        ONLINE  ONLINE       rac1
Checking with the 10g environment:
   
   
  [root@rac1 ~]# crs_stat -t
  Name           Type           Target    State     Host
  ------------------------------------------------------------
  ora.DATA.dg    ora....up.type ONLINE    ONLINE    rac1
  ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1
  ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1
  ora.asm        ora.asm.type   ONLINE    ONLINE    rac1
  ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac2
  ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
  ora....network ora....rk.type ONLINE    ONLINE    rac1
  ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac2
  ora.ons        ora.ons.type   ONLINE    ONLINE    rac1
  ora.rac.db     application    ONLINE    ONLINE    rac2
  ora....c1.inst application    ONLINE    ONLINE    rac1
  ora....c2.inst application    ONLINE    ONLINE    rac2
  ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1
  ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2
  ora....ry.acfs ora....fs.type ONLINE    ONLINE    rac1
  ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1

Note: do not remove the 10g environment variables right after the upgrade. At this point the 11g Clusterware has registered the database in the cluster by default, but the 11g srvctl cannot stop a database that is still running from the 10g home.
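So if the database needs to be stopped during this window, use the old home's srvctl. A sketch, assuming the database name `rac` and the 10g home path from this article:

```shell
# Use the OLD 10g home's srvctl; the 11g srvctl cannot yet manage this database.
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ORACLE_HOME/bin/srvctl stop database -d rac

# ...and restart it the same way when done:
$ORACLE_HOME/bin/srvctl start database -d rac
```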


2. Migrate the OCR and voting disk

After the Grid upgrade, the OCR and voting disk are still on raw devices, whereas in 11g they belong in ASM. We create a new disk group to replace the old OCR and voting-disk storage.

Use asmca to create the new OCR disk group:

    
    
  [oracle@rac1 grid]$ /u01/11.2.0/grid/bin/asmca
 
 
Initially I created a 1 GB OCR disk group, but adding the OCR to it failed with an insufficient-space error; 11.2.0.4 needs more space for the OCR. Recreating the disk group on a 3 GB disk worked.
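As an alternative to asmca, the disk group can also be created from the ASM instance in sqlplus. A sketch, assuming /dev/raw/raw5 is the new ~3 GB device (it is the device the voting disk ends up on later in this article) and external redundancy, which matches the single voting disk shown below:

```shell
# Create the OCRVOTE disk group from the 11g ASM instance.
# compatible.asm must be 11.2 for the disk group to hold OCR/voting files.
export ORACLE_SID=+ASM1 ORACLE_HOME=/u01/11.2.0/grid
$ORACLE_HOME/bin/sqlplus -S / as sysasm <<'EOF'
CREATE DISKGROUP OCRVOTE EXTERNAL REDUNDANCY
  DISK '/dev/raw/raw5'
  ATTRIBUTE 'compatible.asm' = '11.2';
EXIT;
EOF
```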
Migrate the OCR and voting disk into ASM.
Use ocrconfig -add +ocrvote to register the new OCR location, then ocrconfig -delete to remove the old one:
   
   
  [root@rac1 ~]# cd /u01/11.2.0/grid/bin
  [root@rac1 bin]# ./ocrconfig -add +ocrvote
  [root@rac1 bin]# ./ocrconfig -delete /dev/raw/raw1
  [root@rac1 bin]# ./ocrcheck
  Status of Oracle Cluster Registry is as follows :
           Version                  :          3
           Total space (kbytes)     :    1039908
           Used space (kbytes)      :       6252
           Available space (kbytes) :    1033656
           ID                       : 1371096888
           Device/File Name         :   +ocrvote
                                      Device/File integrity check succeeded
                                      Device/File not configured
                                      Device/File not configured
                                      Device/File not configured
                                      Device/File not configured
           Cluster registry integrity check succeeded
           Logical corruption check succeeded
Use crsctl replace votedisk to move the voting disk from the raw device into ASM:
   
   
  [oracle@rac1 ~]$ crsctl replace votedisk +ocrvote
  CRS-4256: Updating the profile
  Successful addition of voting disk 63b58556b1f24f0cbf17c5e690264da3.
  Successful deletion of voting disk 32d0508888cbcf0abf885eeeeeec5ae7.
  Successfully replaced voting disk group with +ocrvote.
  CRS-4256: Updating the profile
  CRS-4266: Voting file(s) successfully replaced
    
    
  [oracle@rac1 ~]$ crsctl query css votedisk
  ##  STATE    File Universal Id                File Name       Disk group
  --  -----    -----------------                ---------       ----------
   1. ONLINE   63b58556b1f24f0cbf17c5e690264da3 (/dev/raw/raw5) [OCRVOTE]
  Located 1 voting disk(s).

3. Upgrade the database software

Switch to the oracle database environment variables and run runInstaller:
   
   
  [oracle@rac1 database]$ env | grep ORACLE
  ORACLE_SID=rac1
  ORACLE_BASE=/u01/app
  ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
  [oracle@rac1 database]$ ./runInstaller
  Starting Oracle Universal Installer...
  Checking Temp space: must be greater than 120 MB.   Actual 5896 MB    Passed
  Checking swap space: must be greater than 150 MB.   Actual 2527 MB    Passed
  Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
 
Here, select Upgrade an existing database.
 
 
 
 
 
 
If at this step some resources are on the wrong node, for example node 2's VIP is running on node 1, use srvctl relocate to move them back to their home node:
   
   
  [oracle@rac1 ~]$ srvctl -h | grep relocate
  Usage: srvctl relocate database -d <db_unique_name> {[-n <target>] [-w <timeout>] | -a [-r]} [-v]
  Usage: srvctl relocate service -d <db_unique_name> -s <service_name> {-i <old_inst_name> -t <new_inst_name> | -c <current_node> -n <target_node>} [-f]
  Usage: srvctl relocate vip -i <vip_name> [-n <node_name>] [-f] [-v]
  Usage: srvctl relocate scan -i <ordinal_number> [-n <node_name>]
  Usage: srvctl relocate scan_listener -i <ordinal_number> [-n <node_name>]
  Usage: srvctl relocate server -n "<server_list>" -g <pool_name> [-f]
  Usage: srvctl relocate oc4j [-n <node_name>] [-v]
  Usage: srvctl relocate gns [-n <node_name>] [-v]
  Usage: srvctl relocate cvu [-n <node_name>]
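For the VIP case described above, a hypothetical invocation would look like this (the VIP name `rac2-vip` is my assumption based on the ora.rac2.vip resource; confirm it with `srvctl config vip` first):

```shell
# Confirm the VIP name and where it is currently running
srvctl config vip -n rac2
srvctl status vip -n rac2

# Move rac2's VIP back to its home node (VIP name assumed)
srvctl relocate vip -i rac2-vip -n rac2
```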
 
 
 
Run root.sh. Node 1:
   
   
  [root@rac1 oracle]# /u01/app/oracle/product/11.2.0/db_1/root.sh
  Performing root user operation for Oracle 11g
  The following environment variables are set as:
      ORACLE_OWNER = oracle
      ORACLE_HOME  = /u01/app/oracle/product/11.2.0/db_1
  Enter the full pathname of the local bin directory: [/usr/local/bin]:
  The contents of "dbhome" have not changed. No need to overwrite.
  The contents of "oraenv" have not changed. No need to overwrite.
  The contents of "coraenv" have not changed. No need to overwrite.
  Entries will be added to the /etc/oratab file as needed by
  Database Configuration Assistant when a database is created
  Finished running generic part of root script.
  Now product-specific root actions will be performed.
  Finished product-specific root actions.
Node 2:
   
   
  [root@rac2 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
  Performing root user operation for Oracle 11g
  The following environment variables are set as:
      ORACLE_OWNER = oracle
      ORACLE_HOME  = /u01/app/oracle/product/11.2.0/db_1
  Enter the full pathname of the local bin directory: [/usr/local/bin]:
  The contents of "dbhome" have not changed. No need to overwrite.
  The contents of "oraenv" have not changed. No need to overwrite.
  The contents of "coraenv" have not changed. No need to overwrite.
  Entries will be added to the /etc/oratab file as needed by
  Database Configuration Assistant when a database is created
  Finished running generic part of root script.
  Now product-specific root actions will be performed.
  Finished product-specific root actions.

4. Upgrade the database data dictionary

Once that completes, go straight into dbua to upgrade the database.
 
Note: it is advisable to purge the recycle bin first. The database is still open from the 10g home at this point, so sqlplus must be started from the 10g ORACLE_HOME.
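Purging the recycle bin can be sketched like this, run from the 10g home (assumption: the 10g environment variables are still set, as the article recommends):

```shell
# Must use the 10g ORACLE_HOME while the database is still open under 10g.
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 ORACLE_SID=rac1
$ORACLE_HOME/bin/sqlplus -S / as sysdba <<'EOF'
PURGE DBA_RECYCLEBIN;
EXIT;
EOF
```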
The degree of parallelism can be set here:
 
 
 
 
Use crsctl query to check the CRS versions:
     
     
  [oracle@rac1 database]$ crsctl | grep query
  crsctl query css votedisk      - lists the voting disks used by CSS
  crsctl query crs softwareversion [<nodename>] - lists the version of CRS software installed
  crsctl query crs activeversion - lists the CRS software operating version
      
      
  [oracle@rac1 database]$ crsctl query crs softwareversion
  CRS software version on node [rac1] is [11.2.0.4.0]
  [oracle@rac1 database]$ crsctl query crs activeversion
  CRS active version on the cluster is [11.2.0.4.0]
  [oracle@rac1 database]$
       
       
  [oracle@rac2 ~]$ sqlplus / as sysdba
  SQL*Plus: Release 11.2.0.4.0 Production on Wed Nov 23 01:41:35 2016
  Copyright (c) 1982, 2013, Oracle.  All rights reserved.
  Connected to:
  Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
  With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
  Data Mining and Real Application Testing options
  SQL> select status from v$instance;
  STATUS
  ------------
  OPEN
  SQL> select * from v$version;
  BANNER
  --------------------------------------------------------------------------------
  Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
  PL/SQL Release 11.2.0.4.0 - Production
  CORE    11.2.0.4.0      Production
  TNS for Linux: Version 11.2.0.4.0 - Production
  NLSRTL Version 11.2.0.4.0 - Production
Check the RAC resource status:
     
     
  [oracle@rac2 ~]$ crsctl stat res -t
  --------------------------------------------------------------------------------
  NAME           TARGET  STATE        SERVER       STATE_DETAILS
  --------------------------------------------------------------------------------
  Local Resources
  --------------------------------------------------------------------------------
  ora.DATA.dg
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  ora.LISTENER.lsnr
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  ora.OCRVOTE.dg
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  ora.asm
                 ONLINE  ONLINE       rac1         Started
                 ONLINE  ONLINE       rac2         Started
  ora.gsd
                 OFFLINE OFFLINE      rac1
                 OFFLINE OFFLINE      rac2
  ora.net1.network
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  ora.ons
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  ora.registry.acfs
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  --------------------------------------------------------------------------------
  Cluster Resources
  --------------------------------------------------------------------------------
  ora.LISTENER_SCAN1.lsnr
        1        ONLINE  ONLINE       rac1
  ora.cvu
        1        ONLINE  ONLINE       rac1
  ora.oc4j
        1        ONLINE  ONLINE       rac1
  ora.rac.db
        1        ONLINE  ONLINE       rac1         Open
        2        ONLINE  ONLINE       rac2         Open
  ora.rac1.vip
        1        ONLINE  ONLINE       rac1
  ora.rac2.vip
        1        ONLINE  ONLINE       rac2
  ora.scan1.vip
        1        ONLINE  ONLINE       rac1
  [oracle@rac2 ~]$
      
      
  SQL> select COMP_NAME, VERSION, STATUS from dba_registry;

  COMP_NAME                                VERSION      STATUS
  ---------------------------------------- ------------ -----------
  Oracle Workspace Manager                 11.2.0.4.0   VALID
  Oracle Database Catalog Views            11.2.0.4.0   VALID
  Oracle Database Packages and Types       11.2.0.4.0   VALID
  Oracle Real Application Clusters         11.2.0.4.0   VALID

5. Remove the old 10g Oracle CRS and RDBMS homes

Once the upgrade is confirmed successful, the old Oracle homes can be removed.

       
       
  [oracle@rac1 ContentsXML]$ pwd
  /u01/app/oraInventory/ContentsXML
  [oracle@rac1 ContentsXML]$ cat inventory.xml
  <?xml version="1.0" standalone="yes" ?>
  <!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
  All rights reserved. -->
  <!-- Do not modify the contents of this file by hand. -->
  <INVENTORY>
  <VERSION_INFO>
     <SAVED_WITH>11.2.0.4.0</SAVED_WITH>
     <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
  </VERSION_INFO>
  <HOME_LIST>
  <HOME NAME="OraCrs10g_home" LOC="/u01/app/oracle/product/10.2.0/crs" TYPE="O" IDX="1">
     <NODE_LIST>
        <NODE NAME="rac1"/>
        <NODE NAME="rac2"/>
     </NODE_LIST>
  </HOME>
  <HOME NAME="OraDb10g_home1" LOC="/u01/app/oracle/product/10.2.0/db_1" TYPE="O" IDX="2">
     <NODE_LIST>
        <NODE NAME="rac1"/>
        <NODE NAME="rac2"/>
     </NODE_LIST>
  </HOME>
  <HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/11.2.0/grid" TYPE="O" IDX="3" CRS="true">
     <NODE_LIST>
        <NODE NAME="rac1"/>
        <NODE NAME="rac2"/>
     </NODE_LIST>
  </HOME>
  <HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="4">
     <NODE_LIST>
        <NODE NAME="rac1"/>
        <NODE NAME="rac2"/>
     </NODE_LIST>
  </HOME>
  </HOME_LIST>
  <COMPOSITEHOME_LIST>
  </COMPOSITEHOME_LIST>
  </INVENTORY>
Use the OUI GUI to deinstall them:
     
     
  [oracle@rac1 bin]$ pwd
  /u01/app/oracle/product/10.2.0/crs/oui/bin
  [oracle@rac1 bin]$ ./runInstaller
  Starting Oracle Universal Installer...
  No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
  Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-11-23_01-54-31AM. Please wait ...[oracle@rac1 bin]$ Oracle Universal Installer, Version 10.2.0.5.0 Production
  Copyright (C) 1999, 2010, Oracle. All rights reserved.
Choose deinstall and remove the CRS home first.
Then remove the 10g ORACLE_HOME.
Finally, the installation directories can simply be deleted:
      
      
  [root@rac1 u01]# cd app/oracle/product/
  [root@rac1 product]# ls -l
  total 8
  drwxrwx--- 3 oracle oinstall 4096 Nov 23 02:10 10.2.0
  drwxr-xr-x 3 root   root     4096 Nov 22 19:19 11.2.0
  [root@rac1 product]# rm -rf 10.2.0
After the removal, the instance on node 1 went down, and starting it manually failed:
      
      
  SQL> startup
  ORA-27504: IPC error creating OSD context
A search on My Oracle Support (MOS) pointed to the IPC library: relink it in the 11g home as follows:
      
      
  [oracle@rac1 lib]$ pwd
  /u01/app/oracle/product/11.2.0/db_1/rdbms/lib
  [oracle@rac1 lib]$ make -f ins_rdbms.mk ipc_g
  rm -f /u01/app/oracle/product/11.2.0/db_1/lib/libskgxp11.so
  cp /u01/app/oracle/product/11.2.0/db_1/lib//libskgxpg.so /u01/app/oracle/product/11.2.0/db_1/lib/libskgxp11.so
The instance then starts successfully:
      
      
  SQL> startup
  ORACLE instance started.
  Total System Global Area  622149632 bytes
  Fixed Size                  2255792 bytes
  Variable Size             243270736 bytes
  Database Buffers          373293056 bytes
  Redo Buffers                3330048 bytes
  Database mounted.
  Database opened.
       
       
  [oracle@rac1 lib]$ crsctl stat res -t
  --------------------------------------------------------------------------------
  NAME           TARGET  STATE        SERVER       STATE_DETAILS
  --------------------------------------------------------------------------------
  Local Resources
  --------------------------------------------------------------------------------
  ora.DATA.dg
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  ora.LISTENER.lsnr
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  ora.OCRVOTE.dg
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  ora.asm
                 ONLINE  ONLINE       rac1         Started
                 ONLINE  ONLINE       rac2         Started
  ora.gsd
                 OFFLINE OFFLINE      rac1
                 OFFLINE OFFLINE      rac2
  ora.net1.network
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  ora.ons
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  ora.registry.acfs
                 ONLINE  ONLINE       rac1
                 ONLINE  ONLINE       rac2
  --------------------------------------------------------------------------------
  Cluster Resources
  --------------------------------------------------------------------------------
  ora.LISTENER_SCAN1.lsnr
        1        ONLINE  ONLINE       rac1
  ora.cvu
        1        ONLINE  ONLINE       rac1
  ora.oc4j
        1        ONLINE  ONLINE       rac1
  ora.rac.db
        1        ONLINE  ONLINE       rac1         Open
        2        ONLINE  ONLINE       rac2         Open
  ora.rac1.vip
        1        ONLINE  ONLINE       rac1
  ora.rac2.vip
        1        ONLINE  ONLINE       rac2
  ora.scan1.vip
        1        ONLINE  ONLINE       rac1
After manually shutting the servers down and rebooting them, both Grid Infrastructure and the database start automatically. The upgrade is complete.
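After the reboot, a short checklist like the following can confirm everything came up cleanly (database name `rac` as used throughout this article):

```shell
crsctl query crs activeversion   # expect [11.2.0.4.0]
crsctl stat res -t               # resources ONLINE (ora.gsd OFFLINE is normal)
srvctl status database -d rac    # both instances should be running
ocrcheck                         # OCR on +ocrvote, integrity check succeeded
crsctl query css votedisk        # voting disk in the OCRVOTE disk group
```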
 
 
 
Uploading images to CSDN is cumbersome, so I have uploaded a Word version of this document that includes the screenshots; download it here if needed:

http://download.csdn.net/detail/su377486/9693173