Reference document:
How to Clone an 11.2.0.3 Grid Infrastructure Home and Clusterware (Doc ID 1413846.1)
########## For Oracle 11g RAC Database ##########
10.19.246.22 tokf6a-vip
10.19.246.24 tokf6b-vip
172.19.246.22 tokf6a-priv
172.19.246.24 tokf6b-priv
10.19.246.28 tokf6-scan
1. tar up the grid home on the source host
2. scp it to the target host
3. Remove the files that must not be carried over
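Steps 1-2 can be sketched as a dry run that builds and prints the packaging commands for review before execution; the tarball path and target host here are assumptions for illustration (`tar -p` preserves the ownership and permissions the clone needs intact):

```shell
# Dry-run sketch of packaging and copying the grid home (paths/host are assumptions)
GRID_HOME=/grid/product/11.2.0/gridhome_1
TARBALL=/tmp/gridhome_1.tar              # assumed staging path
TARGET=tokf6a                            # assumed target node
# -p preserves ownership/permissions; -C avoids embedding absolute paths
PACK="tar -cvpf $TARBALL -C $(dirname $GRID_HOME) $(basename $GRID_HOME)"
COPY="scp $TARBALL grid@$TARGET:/tmp/"
echo "$PACK"
echo "$COPY"
```

Review the printed commands, then run them as the grid owner on the source host.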
cd $GRID_HOME
Note: delete the directories named after the source hostname
rm -rf host_name
rm -rf log/host_name
rm -rf gpnp/host_name
find gpnp -type f -exec rm -f {} \;
find cfgtoollogs -type f -exec rm -f {} \;
rm -rf crs/init/*
rm -rf cdata/*
rm -rf crf/*
rm -rf network/admin/*.ora
find . -name '*.ouibak' -exec rm {} \;
find . -name '*.ouibak.1' -exec rm {} \;
rm -rf root.sh*
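The hostname-specific part of the cleanup above can be generated as a dry run, so the commands can be reviewed before running them inside `$GRID_HOME`; `SRC_HOST` is a placeholder, set it to the real source node name:

```shell
# Dry-run sketch: print the hostname-specific cleanup commands for review
SRC_HOST=sokf6a                          # hypothetical source hostname (assumption)
for d in "$SRC_HOST" "log/$SRC_HOST" "gpnp/$SRC_HOST"; do
  echo "rm -rf $d"                       # review, then run in $GRID_HOME
done
```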
4.mkdir -p /grid/app/oraInventory
chown grid:oinstall /grid/app/oraInventory
chown -R grid:oinstall /grid
5.chmod u+s /grid/product/11.2.0/gridhome_1/bin/oracle
chmod g+s /grid/product/11.2.0/gridhome_1/bin/oracle
chmod u+s /grid/product/11.2.0/gridhome_1/bin/extjob
chmod u+s /grid/product/11.2.0/gridhome_1/bin/jssu
chmod u+s /grid/product/11.2.0/gridhome_1/bin/oradism
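Step 5 can be sketched as a loop; it is demonstrated here on scratch files (a real run targets `$GRID_HOME/bin` as root), and `[ -u f ]` / `[ -g f ]` verify that the setuid/setgid bits took:

```shell
# Sketch: set the setuid/setgid bits the cloned home requires (scratch demo)
BIN=$(mktemp -d)                         # stand-in for /grid/product/11.2.0/gridhome_1/bin
touch "$BIN/oracle" "$BIN/extjob" "$BIN/jssu" "$BIN/oradism"
for f in oracle extjob jssu oradism; do
  chmod u+s "$BIN/$f"                    # setuid on all four binaries
done
chmod g+s "$BIN/oracle"                  # oracle additionally needs setgid
[ -u "$BIN/oracle" ] && [ -g "$BIN/oracle" ] && echo "bits set"
```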
cd /grid/product/11.2.0/gridhome_1/clone/bin
perl clone.pl ORACLE_HOME="/grid/product/11.2.0/gridhome_1" ORACLE_HOME_NAME=cltokf6 ORACLE_BASE=/grid/app INVENTORY_LOCATION=/grid/app/oraInventory OSDBA_GROUP=oinstall OSOPER_GROUP=oper -O'"CLUSTER_NODES={tokf6a,tokf6b}"' -O'"LOCAL_NODE=tokf6a"' CRS=TRUE
********************************************************************************
Your platform requires the root user to perform certain pre-clone
OS preparation. The root user should run the shell script 'rootpre.sh' before
you proceed with cloning. rootpre.sh can be found at
/grid/product/11.2.0/gridhome_1/clone directory.
Answer 'y' if the root user has run 'rootpre.sh' script.
********************************************************************************
Has 'rootpre.sh' been run by the root user? [y/n] (n)
y
./runInstaller -clone -waitForCompletion "ORACLE_HOME=/grid/product/11.2.0/gridhome_1" "ORACLE_HOME_NAME=cltokf6" "ORACLE_BASE=/oracle/app/oracle" "oracle_install_OSDBA=oinstall" "oracle_install_OSOPER=oper" "INVENTORY_LOCATION=/grid/oraInventory" "CLUSTER_NODES={tokf6a,tokf6b}" "LOCAL_NODE="tokf6a "CRS=TRUE" -silent -noConfig -nowait
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 32768 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-03-12_11-23-58AM. Please wait ...Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
You can find the log of this install session at:
/oracle/app/oraInventory/logs/cloneActions2015-03-12_11-23-58AM.log
.................................................................................................... 100% Done.
Installation in progress (Thursday, March 12, 2015 11:24:40 AM GMT+08:00)
..................................................................... 69% Done.
Install successful
Linking in progress (Thursday, March 12, 2015 11:25:11 AM GMT+08:00)
Link successful
Setup in progress (Thursday, March 12, 2015 11:28:03 AM GMT+08:00)
................ 100% Done.
Setup successful
End of install phases.(Thursday, March 12, 2015 11:28:41 AM GMT+08:00)
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/grid/product/11.2.0/gridhome_1/root.sh #On nodes tokf6a
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
Run the script on the local node first. After successful completion, you can run the script in parallel on all the other nodes.
The cloning of OraHome1Grid was successful.
Please check '/oracle/app/oraInventory/logs/cloneActions2015-03-12_11-23-58AM.log' for more details.
Run on the second node:
cd /grid/product/11.2.0/gridhome_1/clone/bin
perl clone.pl ORACLE_HOME="/grid/product/11.2.0/gridhome_1" ORACLE_HOME_NAME=cltokf6 ORACLE_BASE=/grid/app INVENTORY_LOCATION=/grid/app/oraInventory OSDBA_GROUP=oinstall OSOPER_GROUP=oper -O'"CLUSTER_NODES={tokf6a,tokf6b}"' -O'"LOCAL_NODE=tokf6b"' CRS=TRUE
********************************************************************************
Your platform requires the root user to perform certain pre-clone
OS preparation. The root user should run the shell script 'rootpre.sh' before
you proceed with cloning. rootpre.sh can be found at
/grid/product/11.2.0/gridhome_1/clone directory.
Answer 'y' if the root user has run 'rootpre.sh' script.
********************************************************************************
Has 'rootpre.sh' been run by the root user? [y/n] (n)
y
./runInstaller -clone -waitForCompletion "ORACLE_HOME=/grid/product/11.2.0/gridhome_1" "ORACLE_HOME_NAME=OraHome1Grid" "ORACLE_BASE=/oracle/app/oracle" "oracle_install_OSDBA=oinstall" "oracle_install_OSOPER=oper" "INVENTORY_LOCATION=/grid/oraInventory" "CLUSTER_NODES={tokf6a,tokf6b}" "LOCAL_NODE="tokf6a "CRS=TRUE" -silent -noConfig -nowait
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 32768 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-03-12_11-15-19AM. Please wait ...Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
You can find the log of this install session at:
/oracle/app/oraInventory/logs/cloneActions2015-03-12_11-15-19AM.log
.................................................................................................... 100% Done.
Installation in progress (Thursday, March 12, 2015 11:15:55 AM GMT+08:00)
..................................................................... 69% Done.
Install successful
Linking in progress (Thursday, March 12, 2015 11:16:25 AM GMT+08:00)
Link successful
Setup in progress (Thursday, March 12, 2015 11:18:32 AM GMT+08:00)
................ 100% Done.
Setup successful
End of install phases.(Thursday, March 12, 2015 11:19:10 AM GMT+08:00)
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/grid/product/11.2.0/gridhome_1/root.sh #On nodes tokf6a
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
Run the script on the local node first. After successful completion, you can run the script in parallel on all the other nodes.
The cloning of OraHome1Grid was successful.
Please check '/oracle/app/oraInventory/logs/cloneActions2015-03-12_11-15-19AM.log' for more details.
chown grid:oinstall /dev/rolv_ocr1
chown grid:oinstall /dev/rolv_ocr2
chown grid:oinstall /dev/rolv_vote1
chown grid:oinstall /dev/rolv_vote2
chown grid:oinstall /dev/rolv_vote3
After the clone completes, run the scripts as prompted:
/grid/oraInventory/orainstRoot.sh
/grid/product/11.2.0/gridhome_1/root.sh
Check the log file reported after running root.sh, then run the following script as needed (this deployment is a cluster):
/grid/product/11.2.0/gridhome_1/crs/config/config.sh /* by default this script configures the cluster, including the IPs, and launches the graphical installer */
Since no GUI is available here, write a response file config.rsp instead:
==================================================================
#oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v11_2_0
INVENTORY_LOCATION=/grid/oraInventory
SELECTED_LANGUAGES=en
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/grid
#oracle.install.asm.OSDBA=asmdba
#oracle.install.asm.OSOPER=oinstall
oracle.install.crs.config.gpnp.scanName=tokf6-scan
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.clusterName=OraHome1Grid
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.autoConfigureClusterNodeVIP=true
oracle.install.crs.config.clusterNodes=tokf6a,tokf6b
#-------------------------------------------------------------------------------
# The value should be a comma separated strings where each string is as shown below
# InterfaceName:SubnetMask:InterfaceType
# where InterfaceType can be either "1", "2", or "3"
# (1 indicates public, 2 indicates private, and 3 indicates the interface is not used)
#
# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.networkInterfaceList=en8:10.19.246.0:1,en9:172.19.246.0:2
#oracle.install.crs.config.storageOption=ASM_STORAGE
#oracle.install.asm.SYSASMPassword=Oracle_11
#oracle.install.asm.diskGroup.name=DATA
#oracle.install.asm.diskGroup.redundancy=EXTERNAL
#oracle.install.asm.diskGroup.AUSize=8
#oracle.install.asm.diskGroup.disks=/dev/mapper/lun01,/dev/mapper/lun02
#oracle.install.asm.diskGroup.diskDiscoveryString=/dev/mapper/lun0*
#oracle.install.asm.monitorPassword=Oracle_11
#oracle.install.asm.upgradeASM=false
#[ConfigWizard]
#oracle.install.asm.useExistingDiskGroup=false
==================================================================
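With the response file in place, config.sh can be driven non-interactively instead of through the GUI. The `-silent -responseFile` flags and the response-file path below are assumptions to verify against the MOS note before running; the command is printed here for review:

```shell
# Sketch (assumed flags; verify against Doc ID 1413846.1): run config.sh silently
GRID_HOME=/grid/product/11.2.0/gridhome_1
RSP=/tmp/config.rsp                      # assumed location of the response file above
CMD="$GRID_HOME/crs/config/config.sh -silent -responseFile $RSP"
echo "$CMD"                              # review, then run as the grid user
```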
Specify raw devices for the grid OCR and voting disks:
more /grid/product/11.2.0/gridhome_1/crs/install/crsconfig_params
# $Header: has/install/crsconfig/crsconfig_params.sbs /st_has_11.2.0/3 2011/03/21 22:55:23 ksviswan Exp $
#
# crsconfig.lib
#
# Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
#
# NAME
# crsconfig_params.sbs - Installer variables required for root config
#
# DESCRIPTION
# crsconfig_param -
#
# MODIFIED (MM/DD/YY)
# ksviswan 03/08/11 - Backport ksviswan_febbugs2 from main
# ksviswan 02/03/11 - Backport ksviswan_janbugs4 from main
# dpham 05/20/10 - XbranchMerge dpham_bug-8609692 from st_has_11.2.0.1.0
# dpham 03/17/10 - Add TZ variable (9462081
# sujkumar 01/31/10 - CRF_HOME as ORACLE_HOME
# sujkumar 01/05/10 - Double quote args
# dpham 11/25/09 - Remove NETCFGJAR_NAME, EWTJAR_NAME, JEWTJAR_NAME,
# SHAREJAR_NAME, HELPJAR_NAME, and EMBASEJAR_NAME
# sukumar 11/04/09 - Fix CRFHOME. Add CRFHOME2 for Windows.
# anutripa 10/18/09 - Add CRFHOME for IPD/OS
# dpham 03/10/09 - Add ORACLE_BASE
# dpham 11/19/08 - Add ORA_ASM_GROUP
# khsingh 11/13/08 - revert ORA_ASM_GROUP for automated sh
# dpham 11/03/08 - Add ORA_ASM_GROUP
# ppallapo 09/22/08 - Add OCRID and CLUSTER_GUID
# dpham 09/10/08 - set OCFS_CONFIG to sl_diskDriveMappingList
# srisanka 05/13/08 - remove ORA_CRS_HOME, ORA_HA_HOME
# ysharoni 05/07/08 - NETWORKS fmt change s_networkList->s_finalIntrList
# srisanka 04/14/08 - ASM_UPGRADE param
# hkanchar 04/02/08 - Add OCR and OLRLOC for windows
# ysharoni 02/15/08 - bug 6817375
# ahabbas 02/28/08 - temporarily remove the need to instantiate the
# OCFS_CONFIG value
# srisanka 02/12/08 - add OCFS_CONFIG param
# srisanka 01/15/08 - separate generic and OSD params
# jachang 01/15/08 - Prepare ASM diskgroup parameter (commented out)
# ysharoni 12/27/07 - Static pars CSS_LEASEDURATION and ASM_SPFILE
# yizhang 12/10/07 - Add SCAN_NAME and SCAN_PORT
# ysharoni 12/14/07 - gpnp work, cont-d
# jachang 11/30/07 - Adding votedisk discovery string
# ysharoni 11/27/07 - Add GPnP params
# srisanka 10/18/07 - add params and combine crsconfig_defs.sh with this
# file
# khsingh 12/08/06 - add HA parameters
# khsingh 12/08/06 - add HA_HOME
# khsingh 11/25/06 - Creation
# ==========================================================
# Copyright (c) 2001, 2011, Oracle and/or its affiliates. All rights reserved.
#
# crsconfig_params.sbs -
#
# ==========================================================
SILENT=true
ORACLE_OWNER=grid
ORA_DBA_GROUP=oinstall
ORA_ASM_GROUP=asmadmin
LANGUAGE_ID=AMERICAN_AMERICA.WE8ISO8859P1
TZ=BEIST-8
ISROLLING=true
REUSEDG=false
ASM_AU_SIZE=1
USER_IGNORED_PREREQ=true
ORACLE_HOME=/grid/product/11.2.0/gridhome_1
ORACLE_BASE=/grid/app
OLD_CRS_HOME=
JREDIR=/grid/product/11.2.0/gridhome_1/jdk/jre/
JLIBDIR=/grid/product/11.2.0/gridhome_1/jlib
VNDR_CLUSTER=true
OCR_LOCATIONS=NO_VAL
CLUSTER_NAME=cltokf6
HOST_NAME_LIST=tokf6a,tokf6b
NODE_NAME_LIST=tokf6a,tokf6b
PRIVATE_NAME_LIST=
VOTING_DISKS=NO_VAL
#VF_DISCOVERY_STRING=%s_vfdiscoverystring%
ASM_UPGRADE=false
ASM_SPFILE=
ASM_DISK_GROUP=DATA
ASM_DISCOVERY_STRING=/dev/rolv*
ASM_DISKS=/dev/rolv_ocr2,/dev/rolv_vote1,/dev/rolv_ocr1,/dev/rolv_vote2,/dev/rolv_vote3
ASM_REDUNDANCY=NORMAL
CRS_STORAGE_OPTION=1
CSS_LEASEDURATION=400
CRS_NODEVIPS="tokf6a-vip/255.255.255.128/en8,tokf6b-vip/255.255.255.128/en8"
NODELIST=tokf6a,tokf6b
NETWORKS="en8"/10.19.246.0:public,"en9"/172.19.246.0:cluster_interconnect
SCAN_NAME=tokf6-scan
SCAN_PORT=1521
GPNP_PA=
OCFS_CONFIG=
# GNS consts
GNS_CONF=false
GNS_ADDR_LIST=
GNS_DOMAIN_LIST=
GNS_ALLOW_NET_LIST=
GNS_DENY_NET_LIST=
GNS_DENY_ITF_LIST=
#### Required by OUI add node
NEW_HOST_NAME_LIST=
NEW_NODE_NAME_LIST=
NEW_PRIVATE_NAME_LIST=
NEW_NODEVIPS="tokf6a-vip/255.255.255.128/en8,tokf6b-vip/255.255.255.128/en8"
############### OCR constants
# GPNPCONFIGDIR is handled differently in dev (T_HAS_WORK for all)
# GPNPGCONFIGDIR in dev expands to T_HAS_WORK_GLOBAL
GPNPCONFIGDIR=$ORACLE_HOME
GPNPGCONFIGDIR=$ORACLE_HOME
OCRLOC=
OLRLOC=
OCRID=
CLUSTER_GUID=
CLSCFG_MISSCOUNT=
#### IPD/OS
CRFHOME="/grid/product/11.2.0/gridhome_1"
########################################
## My Configuration for a cloned GI
## must be 2
CRS_STORAGE_OPTION=2
OCR_LOCATIONS=/dev/rolv_ocr1,/dev/rolv_ocr2
VOTING_DISKS=/dev/rolv_vote1,/dev/rolv_vote2,/dev/rolv_vote3
CLUSTER_NAME=cltokf6
HOST_NAME_LIST=tokf6a,tokf6b
NODE_NAME_LIST=tokf6a,tokf6b
NODELIST=tokf6a,tokf6b
PRIVATE_NAME_LIST=tokf6a-priv,tokf6b-priv
SCAN_NAME=tokf6-scan
SCAN_PORT=1521
NETWORKS="en8"/10.19.246.0:public,"en9"/172.19.246.0:cluster_interconnect
CRS_NODEVIPS="tokf6a-vip/255.255.255.128/en8,tokf6b-vip/255.255.255.128/en8"
########################################
[*KFQ_BOSSNG_root@tokf6a] /tmp> cp /etc/oraInst.loc /etc/oraInst.loc.bak
[*KFQ_BOSSNG_root@tokf6a] /tmp> >/etc/oraInst.loc
[*KFQ_BOSSNG_root@tokf6a] /tmp> mv /etc/oracle/ocr.loc /etc/oracle/ocr.loc.bak
[grid@tokf6a] /grid> tail -f /grid/product/11.2.0/gridhome_1/install/root_tokf6a_2015-03-14_04-14-08.log
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /grid/product/11.2.0/gridhome_1/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'tokf6a'
CRS-2676: Start of 'ora.mdnsd' on 'tokf6a' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'tokf6a'
CRS-2676: Start of 'ora.gpnpd' on 'tokf6a' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'tokf6a'
CRS-2672: Attempting to start 'ora.gipcd' on 'tokf6a'
CRS-2676: Start of 'ora.cssdmonitor' on 'tokf6a' succeeded
CRS-2676: Start of 'ora.gipcd' on 'tokf6a' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'tokf6a'
CRS-2672: Attempting to start 'ora.diskmon' on 'tokf6a'
CRS-2676: Start of 'ora.diskmon' on 'tokf6a' succeeded
CRS-2676: Start of 'ora.cssd' on 'tokf6a' succeeded
PRKO-2190 : VIP exists for node tokf6a, VIP name tokf6a-vip
/grid/product/11.2.0/gridhome_1/bin/srvctl start nodeapps -n tokf6a ... failed
Failed to start Nodeapps at /grid/product/11.2.0/gridhome_1/crs/install/crsconfig_lib.pm line 9401.
/grid/product/11.2.0/gridhome_1/perl/bin/perl -I/grid/product/11.2.0/gridhome_1/perl/lib -I/grid/product/11.2.0/gridhome_1/crs/install /grid/product/11.2.0/gridhome_1/crs/install/rootcrs.pl execution failed
Clean up the OCR configuration
On node 1:
cd /grid/product/11.2.0/gridhome_1/crs/install
./rootcrs.pl -deconfig -force
On the last node:
cd /grid/product/11.2.0/gridhome_1/crs/install
./rootcrs.pl -deconfig -force -lastnode
########## The following clones the DB software ##########
Reference document:
Cloning An Existing Oracle11g Release 2 (11.2.0.x) RDBMS Installation Using OUI (Doc ID 1221705.1)
1. Modify the environment variables
2.tar -cvpf 11g_psu4_dbsoft_for_sokf6a.tar /oracle
3. Copy to the target host and extract: tar -xvf 11g_psu4_dbsoft_for_sokf6a.tar
4.chmod -R 755 /oracle
5. Check whether the home being cloned already exists in the central inventory (/oraInventory/ContentsXML/inventory.xml); if it does, run a detach first to remove it.
6. Perform the clone
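Step 5 can be sketched as follows; it is demonstrated against a scratch copy of inventory.xml (point `INV` at the real central inventory in production), and `runInstaller -detachHome` is the OUI operation used to detach a registered home:

```shell
# Sketch: detect the home in the inventory and print the detach command
OH=/oracle/app/oracle/product/11.2.0/dbhome_1
INV=$(mktemp)                            # stand-in for .../ContentsXML/inventory.xml
# scratch entry mimicking a registered home (assumption, for the demo only)
printf '<HOME NAME="ORA11G_DBHOME" LOC="%s" TYPE="O" IDX="1"/>\n' "$OH" > "$INV"
if grep -q "LOC=\"$OH\"" "$INV"; then
  echo "$OH/oui/bin/runInstaller -silent -detachHome ORACLE_HOME=$OH"
fi
```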
cd $ORACLE_HOME/clone/bin
perl clone.pl ORACLE_HOME="/oracle/app/oracle/product/11.2.0/dbhome_1" ORACLE_HOME_NAME="ORA11G_DBHOME" ORACLE_BASE="/oracle/app/oracle" OSDBA_GROUP=oinstall OSOPER_GROUP=oper
********************************************************************************
Your platform requires the root user to perform certain pre-clone
OS preparation. The root user should run the shell script 'rootpre.sh' before
you proceed with cloning. rootpre.sh can be found at
/oracle/app/oracle/product/11.2.0/dbhome_1/clone directory.
Answer 'y' if the root user has run 'rootpre.sh' script.
********************************************************************************
Has 'rootpre.sh' been run by the root user? [y/n] (n)
y
./runInstaller -clone -waitForCompletion "ORACLE_HOME=/oracle/app/oracle/product/11.2.0/dbhome_1" "ORACLE_HOME_NAME=ORA11G_DBHOME" "ORACLE_BASE=/oracle/app/oracle" "oracle_install_OSDBA=oinstall" "oracle_install_OSOPER=oper" -silent -noConfig -nowait
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 32768 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-03-15_03-05-56PM. Please wait ...Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
You can find the log of this install session at:
/grid/app/oraInventory/logs/cloneActions2015-03-15_03-05-56PM.log
.................................................................................................... 100% Done.
Installation in progress (Sunday, March 15, 2015 3:06:39 PM GMT+08:00)
.............................................................................. 78% Done.
Install successful
Linking in progress (Sunday, March 15, 2015 3:07:21 PM GMT+08:00)
Link successful
Setup in progress (Sunday, March 15, 2015 3:11:12 PM GMT+08:00)
Setup successful
End of install phases.(Sunday, March 15, 2015 3:11:53 PM GMT+08:00)
WARNING:
The following configuration scripts need to be executed as the "root" user.
/oracle/app/oracle/product/11.2.0/dbhome_1/root.sh
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts
7. Set up the Automatic Diagnostic Repository (ADR) directory structure under $ORACLE_BASE/diag. It is created when a database is created, but not during cloning, so it must be created separately:
$ORACLE_HOME/bin/diagsetup basedir=/oracle/app/oracle oraclehome=/oracle/app/oracle/product/11.2.0/dbhome_1
8. Check whether the RAC option is linked (no output means the RAC option is not linked; kcsm.o in the output means the RAC option is enabled):
ar -X32_64 -t $ORACLE_HOME/rdbms/lib/libknlopt.a|grep kcsm.o
If the command returns nothing, run the following:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_on
make -f ins_rdbms.mk ioracle
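Step 8's check-then-relink logic can be sketched as one conditional; it is demonstrated on a scratch archive here (a real check runs against `$ORACLE_HOME/rdbms/lib/libknlopt.a`, using `ar -X32_64` on AIX as shown above):

```shell
# Sketch: relink only when kcsm.o is absent from libknlopt.a (scratch demo)
WORK=$(mktemp -d) && cd "$WORK"
touch kcsm.o
ar rc libknlopt.a kcsm.o                 # scratch archive containing the RAC object
if ar -t libknlopt.a | grep -q '^kcsm\.o$'; then
  echo "RAC option already linked"
else
  echo "make -f ins_rdbms.mk rac_on && make -f ins_rdbms.mk ioracle"
fi
```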
9. Create the audit directory (audit_file_dest below points at /oracle_log/audit)
mkdir -p /oracle_log/audit
chown oracle:dba /oracle_log
The init parameter file for the cloned database:
*._cleanup_rollback_entries=400
*._clusterwide_global_transactions=FALSE
*._fix_control='6514189:OFF'
*._gby_hash_aggregation_enabled=FALSE
*._gc_policy_time=0
*._kghdsidx_count=3
*._ktb_debug_flags=8
*._log_deletion_policy='ALL'
*._memory_imm_mode_without_autosga=FALSE
*._optim_peek_user_binds=FALSE
*._optimizer_adaptive_cursor_sharing=FALSE
*._optimizer_extended_cursor_sharing_rel='NONE'
*._optimizer_extended_cursor_sharing='NONE'
*._optimizer_join_factorization=FALSE
*._optimizer_use_cbqt_star_transformation=FALSE
*._optimizer_use_feedback=FALSE
*._parallel_min_message_pool='4294967296'
*._projection_pushdown=false
*._PX_use_large_pool=TRUE
*._smu_debug_mode=0
*._undo_autotune=FALSE
*._use_adaptive_log_file_sync='FALSE'
*.audit_file_dest='/oracle_log/audit'
*.cluster_database_instances=2
*.cluster_database=TRUE
*.compatible='11.2.0.0.0'
*.control_files='/dev/rolv_ctl1','/dev/rolv_ctl2','/dev/rolv_ctl3'
*.db_block_size=8192
*.db_cache_size=12G
*.db_domain=''
*.db_files=2000
*.db_name='tokf6'
*.deferred_segment_creation=FALSE
*.event='28401 TRACE NAME CONTEXT FOREVER, LEVEL 1','10949 trace name context forever, level 1'
*.fast_start_mttr_target=300
*.gcs_server_processes=6
tokf6a.instance_name='tokf6a'
tokf6b.instance_name='tokf6b'
tokf6a.instance_number=1
tokf6b.instance_number=2
*.java_pool_size=2G
*.job_queue_processes=10
*.large_pool_size=2G
*.lock_sga=TRUE
tokf6a.log_archive_dest_1='LOCATION=/node1'
tokf6b.log_archive_dest_1='LOCATION=/node2'
tokf6a.log_archive_format='kf6a-%r-%t-%S.arc'
tokf6b.log_archive_format='kf6b-%r-%t-%S.arc'
*.memory_target=0
*.open_cursors=1000
*.parallel_force_local=TRUE
*.parallel_max_servers=32
*.parallel_min_servers=16
*.parallel_servers_target=0
*.pga_aggregate_target=8G
*.pre_page_sga=TRUE
*.processes=3000
*.query_rewrite_enabled='TRUE'
*.recovery_parallelism=8
*.recyclebin='OFF'
*.remote_listener='tokf6-scan:1529'
*.remote_login_passwordfile='exclusive'
*.resource_limit=TRUE
*.resource_manager_plan=''
*.session_cached_cursors=50
*.sga_target=0
*.shared_pool_size=4G
*.star_transformation_enabled='FALSE'
tokf6a.thread=1
tokf6b.thread=2
*.timed_statistics=TRUE
*.undo_management='AUTO'
*.undo_retention=10800
tokf6a.undo_tablespace='UNDOTBS1'
tokf6b.undo_tablespace='UNDOTBS2'
10. Register the database with the CRS cluster
For a database created with dbca, srvctl already contains the database and instance information. A RAC database restored from backup, however, has no database or instance information in srvctl, so many srvctl management commands cannot be used. The database information must therefore be added to srvctl manually:
srvctl add database -d tokf6 -o /oracle/app/oracle/product/11.2.0/dbhome_1
srvctl add instance -d tokf6 -i tokf6b -n tokf6b
srvctl add instance -d tokf6 -i tokf6a -n tokf6a
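After the registration above, standard srvctl subcommands verify the configuration and bring the database up under CRS; the commands are printed here for review and would be run as the oracle user on a cluster node:

```shell
# Sketch: verify the registration, then start and check the database under CRS
DB=tokf6
for sub in "config database -d $DB" "start database -d $DB" "status database -d $DB"; do
  echo "srvctl $sub"                     # review, then run on a cluster node
done
```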
SCAN_NAME=tokf6-scan
SCAN_PORT=1521
GPNP_PA=
OCFS_CONFIG=
# GNS consts
GNS_CONF=false
GNS_ADDR_LIST=
GNS_DOMAIN_LIST=
GNS_ALLOW_NET_LIST=
GNS_DENY_NET_LIST=
GNS_DENY_ITF_LIST=
#### Required by OUI add node
NEW_HOST_NAME_LIST=
NEW_NODE_NAME_LIST=
NEW_PRIVATE_NAME_LIST=
NEW_NODEVIPS="tokf6a-vip/255.255.255.128/en8,tokf6b-vip/255.255.255.128/en8"
############### OCR constants
# GPNPCONFIGDIR is handled differently in dev (T_HAS_WORK for all)
# GPNPGCONFIGDIR in dev expands to T_HAS_WORK_GLOBAL
GPNPCONFIGDIR=$ORACLE_HOME
GPNPGCONFIGDIR=$ORACLE_HOME
OCRLOC=
OLRLOC=
OCRID=
CLUSTER_GUID=
CLSCFG_MISSCOUNT=
#### IPD/OS
CRFHOME="/grid/product/11.2.0/gridhome_1"
########################################
## My Configuration for a cloned GI
## Must be 2 (OCR/voting on raw devices rather than ASM)
CRS_STORAGE_OPTION=2
OCR_LOCATIONS=/dev/rolv_ocr1,/dev/rolv_ocr2
VOTING_DISKS=/dev/rolv_vote1,/dev/rolv_vote2,/dev/rolv_vote3
CLUSTER_NAME=cltokf6
HOST_NAME_LIST=tokf6a,tokf6b
NODE_NAME_LIST=tokf6a,tokf6b
NODELIST=tokf6a,tokf6b
PRIVATE_NAME_LIST=tokf6a-priv,tokf6b-priv
SCAN_NAME=tokf6-scan
SCAN_PORT=1521
NETWORKS="en8"/10.19.246.0:public,"en9"/172.19.246.0:cluster_interconnect
CRS_NODEVIPS="tokf6a-vip/255.255.255.128/en8,tokf6b-vip/255.255.255.128/en8"
########################################
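The override block above is simply appended to the bottom of crsconfig_params, relying on the last assignment of each key winning when the file is read (note CRS_STORAGE_OPTION appears earlier as 1 and later as 2). A small sanity check, a sketch assuming plain KEY=VALUE lines, that lists every duplicated key and the value that will take effect:

```shell
#!/bin/sh
# Print the effective (last-occurrence) value of every key that appears more
# than once in a KEY=VALUE parameter file -- useful after appending overrides
# to the bottom of crsconfig_params as done above.
PARAMS="${1:-crsconfig_params}"

awk -F= '
  /^[A-Z_]+=/ {                  # plain KEY=VALUE lines only; skip comments
    key = $1
    count[key]++
    sub(/^[^=]*=/, "")           # strip "KEY=", keep the full value
    last[key] = $0
  }
  END {
    for (k in count)
      if (count[k] > 1)
        printf "%s appears %d times; effective value: %s\n", k, count[k], last[k]
  }
' "$PARAMS"
```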
[*KFQ_BOSSNG_root@tokf6a] /tmp> cp /etc/oraInst.loc /etc/oraInst.loc.bak
[*KFQ_BOSSNG_root@tokf6a] /tmp> >/etc/oraInst.loc
[*KFQ_BOSSNG_root@tokf6a] /tmp> mv /etc/oracle/ocr.loc /etc/oracle/ocr.loc.bak
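The three commands above set the inventory pointer and OCR location files aside before re-running the configuration. A generic sketch of that back-up-then-reset pattern (the path at the bottom is just the example from above):

```shell
#!/bin/sh
# Back up a config file with a timestamp suffix, then truncate the live copy,
# so it can be restored if the re-run fails. Generic form of the
# oraInst.loc / ocr.loc handling above.
backup_and_truncate() {
    f="$1"
    [ -f "$f" ] || return 0                    # nothing to back up
    cp -p "$f" "$f.bak.$(date +%Y%m%d%H%M%S)"  # -p keeps owner/mode
    : > "$f"                                   # empty the live file
}

# Example (the real run targets /etc/oraInst.loc):
backup_and_truncate /etc/oraInst.loc
```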
[grid@tokf6a] /grid> tail -f /grid/product/11.2.0/gridhome_1/install/root_tokf6a_2015-03-14_04-14-08.log
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /grid/product/11.2.0/gridhome_1/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'tokf6a'
CRS-2676: Start of 'ora.mdnsd' on 'tokf6a' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'tokf6a'
CRS-2676: Start of 'ora.gpnpd' on 'tokf6a' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'tokf6a'
CRS-2672: Attempting to start 'ora.gipcd' on 'tokf6a'
CRS-2676: Start of 'ora.cssdmonitor' on 'tokf6a' succeeded
CRS-2676: Start of 'ora.gipcd' on 'tokf6a' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'tokf6a'
CRS-2672: Attempting to start 'ora.diskmon' on 'tokf6a'
CRS-2676: Start of 'ora.diskmon' on 'tokf6a' succeeded
CRS-2676: Start of 'ora.cssd' on 'tokf6a' succeeded
PRKO-2190 : VIP exists for node tokf6a, VIP name tokf6a-vip
/grid/product/11.2.0/gridhome_1/bin/srvctl start nodeapps -n tokf6a ... failed
Failed to start Nodeapps at /grid/product/11.2.0/gridhome_1/crs/install/crsconfig_lib.pm line 9401.
/grid/product/11.2.0/gridhome_1/perl/bin/perl -I/grid/product/11.2.0/gridhome_1/perl/lib -I/grid/product/11.2.0/gridhome_1/crs/install /grid/product/11.2.0/gridhome_1/crs/install/rootcrs.pl execution failed
Clean up the failed OCR configuration before re-running root.sh.
On the first node:
cd /grid/product/11.2.0/gridhome_1/crs/install
./rootcrs.pl -deconfig -force
On the last node:
cd /grid/product/11.2.0/gridhome_1/crs/install
./rootcrs.pl -deconfig -force -lastnode
############################## Below: cloning the DB software ##############################
Reference:
Cloning An Existing Oracle11g Release 2 (11.2.0.x) RDBMS Installation Using OUI (Doc ID 1221705.1)
1. Adjust the environment variables on the target.
2. On the source: tar -cvpf 11g_psu4_dbsoft_for_sokf6a.tar /oracle
3. Copy the tarball to the target and extract it: tar -xvf 11g_psu4_dbsoft_for_sokf6a.tar
4. chmod -R 755 /oracle
5. Check whether the home being cloned is already registered in the central inventory
   (/oraInventory/ContentsXML/inventory.xml). If it is, run a detach to remove it first.
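Step 5 can be scripted. The sketch below greps inventory.xml for the home being cloned; the inventory path and home are this runbook's example values, so adjust them to your layout. If the home is found, detach it with `./runInstaller -silent -detachHome ORACLE_HOME=<home>` before cloning:

```shell
#!/bin/sh
# Check whether an ORACLE_HOME is already registered in the central inventory.
# The default paths below are examples -- adjust to your layout.
INVENTORY_XML="${1:-/oraInventory/ContentsXML/inventory.xml}"
TARGET_HOME="${2:-/oracle/app/oracle/product/11.2.0/dbhome_1}"

if grep -q "LOC=\"$TARGET_HOME\"" "$INVENTORY_XML" 2>/dev/null; then
    echo "$TARGET_HOME is registered -- detach it before cloning:"
    echo "  ./runInstaller -silent -detachHome ORACLE_HOME=$TARGET_HOME"
    exit 1
fi
echo "$TARGET_HOME not registered; safe to clone"
```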
6. Run the clone:
cd $ORACLE_HOME/clone/bin
perl clone.pl ORACLE_HOME="/oracle/app/oracle/product/11.2.0/dbhome_1" ORACLE_HOME_NAME="ORA11G_DBHOME" ORACLE_BASE="/oracle/app/oracle" OSDBA_GROUP=oinstall OSOPER_GROUP=oper
********************************************************************************
Your platform requires the root user to perform certain pre-clone
OS preparation. The root user should run the shell script 'rootpre.sh' before
you proceed with cloning. rootpre.sh can be found at
/oracle/app/oracle/product/11.2.0/dbhome_1/clone directory.
Answer 'y' if the root user has run 'rootpre.sh' script.
********************************************************************************
Has 'rootpre.sh' been run by the root user? [y/n] (n)
y
./runInstaller -clone -waitForCompletion "ORACLE_HOME=/oracle/app/oracle/product/11.2.0/dbhome_1" "ORACLE_HOME_NAME=ORA11G_DBHOME" "ORACLE_BASE=/oracle/app/oracle" "oracle_install_OSDBA=oinstall" "oracle_install_OSOPER=oper" -silent -noConfig -nowait
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 32768 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-03-15_03-05-56PM. Please wait ...Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
You can find the log of this install session at:
/grid/app/oraInventory/logs/cloneActions2015-03-15_03-05-56PM.log
.................................................................................................... 100% Done.
Installation in progress (Sunday, March 15, 2015 3:06:39 PM GMT+08:00)
.............................................................................. 78% Done.
Install successful
Linking in progress (Sunday, March 15, 2015 3:07:21 PM GMT+08:00)
Link successful
Setup in progress (Sunday, March 15, 2015 3:11:12 PM GMT+08:00)
Setup successful
End of install phases.(Sunday, March 15, 2015 3:11:53 PM GMT+08:00)
WARNING:
The following configuration scripts need to be executed as the "root" user.
/oracle/app/oracle/product/11.2.0/dbhome_1/root.sh
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts
7. Create the Automatic Diagnostic Repository (ADR) directory structure in $ORACLE_BASE/diag. DBCA creates it when a database is created, but cloning does not, so create it separately:
$ORACLE_HOME/bin/diagsetup basedir=/oracle/app/oracle oraclehome=/oracle/app/oracle/product/11.2.0/dbhome_1
8. Check the RAC option (no output means the RAC option is not linked in; kcsm.o in the output means the RAC option is enabled):
ar -X32_64 -t $ORACLE_HOME/rdbms/lib/libknlopt.a | grep kcsm.o
If the output is empty, relink with the RAC option:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_on
make -f ins_rdbms.mk ioracle
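The `ar | grep` test in step 8 can be wrapped into a small check script. A sketch (note the `-X32_64` flag is AIX-specific; plain `ar -t` is enough on Linux):

```shell
#!/bin/sh
# Report whether the RAC option object (kcsm.o) is present in libknlopt.a.
# On AIX use: ar -X32_64 -t "$LIB"
LIB="${1:-$ORACLE_HOME/rdbms/lib/libknlopt.a}"

if ar -t "$LIB" 2>/dev/null | grep -q 'kcsm\.o'; then
    echo "RAC option enabled (kcsm.o linked in)"
else
    echo "RAC option not linked -- run: make -f ins_rdbms.mk rac_on ioracle"
fi
```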
9. Create the audit directory (audit_file_dest in the pfile below points at /oracle_log/audit):
mkdir -p /oracle_log/audit
chown oracle:dba /oracle_log
The init.ora used for the restored RAC database:
*._cleanup_rollback_entries=400
*._clusterwide_global_transactions=FALSE
*._fix_control='6514189:OFF'
*._gby_hash_aggregation_enabled=FALSE
*._gc_policy_time=0
*._kghdsidx_count=3
*._ktb_debug_flags=8
*._log_deletion_policy='ALL'
*._memory_imm_mode_without_autosga=FALSE
*._optim_peek_user_binds=FALSE
*._optimizer_adaptive_cursor_sharing=FALSE
*._optimizer_extended_cursor_sharing_rel='NONE'
*._optimizer_extended_cursor_sharing='NONE'
*._optimizer_join_factorization=FALSE
*._optimizer_use_cbqt_star_transformation=FALSE
*._optimizer_use_feedback=FALSE
*._parallel_min_message_pool='4294967296'
*._projection_pushdown=false
*._PX_use_large_pool=TRUE
*._smu_debug_mode=0
*._undo_autotune=FALSE
*._use_adaptive_log_file_sync='FALSE'
*.audit_file_dest='/oracle_log/audit'
*.cluster_database_instances=2
*.cluster_database=TRUE
*.compatible='11.2.0.0.0'
*.control_files='/dev/rolv_ctl1','/dev/rolv_ctl2','/dev/rolv_ctl3'
*.db_block_size=8192
*.db_cache_size=12G
*.db_domain=''
*.db_files=2000
*.db_name='tokf6'
*.deferred_segment_creation=FALSE
*.event='28401 TRACE NAME CONTEXT FOREVER, LEVEL 1','10949 trace name context forever, level 1'
*.fast_start_mttr_target=300
*.gcs_server_processes=6
tokf6a.instance_name='tokf6a'
tokf6b.instance_name='tokf6b'
tokf6a.instance_number=1
tokf6b.instance_number=2
*.java_pool_size=2G
*.job_queue_processes=10
*.large_pool_size=2G
*.lock_sga=TRUE
tokf6a.log_archive_dest_1='LOCATION=/node1'
tokf6b.log_archive_dest_1='LOCATION=/node2'
tokf6a.log_archive_format='kf6a-%r-%t-%S.arc'
tokf6b.log_archive_format='kf6b-%r-%t-%S.arc'
*.memory_target=0
*.open_cursors=1000
*.parallel_force_local=TRUE
*.parallel_max_servers=32
*.parallel_min_servers=16
*.parallel_servers_target=0
*.pga_aggregate_target=8G
*.pre_page_sga=TRUE
*.processes=3000
*.query_rewrite_enabled='TRUE'
*.recovery_parallelism=8
*.recyclebin='OFF'
*.remote_listener='tokf6-scan:1529'
*.remote_login_passwordfile='exclusive'
*.resource_limit=TRUE
*.resource_manager_plan=''
*.session_cached_cursors=50
*.sga_target=0
*.shared_pool_size=4G
*.star_transformation_enabled='FALSE'
tokf6a.thread=1
tokf6b.thread=2
*.timed_statistics=TRUE
*.undo_management='AUTO'
*.undo_retention=10800
tokf6a.undo_tablespace='UNDOTBS1'
tokf6b.undo_tablespace='UNDOTBS2'
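Instance-prefixed pfile entries are easy to get wrong when hand-editing (a copied line often keeps the other instance's name). A small sketch that flags `<sid>.instance_name` entries whose value does not match the SID prefix:

```shell
#!/bin/sh
# Flag <sid>.instance_name='<name>' pfile lines where the value does not
# match the SID prefix -- a common copy/paste mistake in hand-built pfiles.
PFILE="${1:-inittokf6a.ora}"

awk -F"[.=']" '
  $2 == "instance_name" {   # fields: sid, instance_name, "", value
    if ($1 != $4)
      printf "MISMATCH: %s.instance_name is %s\n", $1, $4
  }
' "$PFILE"
```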
10. Register the database with the CRS cluster.
For a database created with DBCA, srvctl already holds the database and instance metadata. A RAC database restored from backup has no such entries, so many srvctl commands cannot manage it; the database and instance information must be added to srvctl manually.
srvctl add database -d tokf6 -o /oracle/app/oracle/product/11.2.0/dbhome_1
srvctl add instance -d tokf6 -i tokf6b -n tokf6b
srvctl add instance -d tokf6 -i tokf6a -n tokf6a
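For clusters with more nodes, the registration commands can be generated from a node list. A sketch that only echoes the commands (review the output, then run it as the oracle user); it assumes, as in this cluster, that each instance name matches its node name:

```shell
#!/bin/sh
# Emit the srvctl commands that register a restored RAC database and its
# instances with CRS. Echoes the commands; review, then run them as oracle.
DB=tokf6
OHOME=/oracle/app/oracle/product/11.2.0/dbhome_1
NODES="tokf6a tokf6b"

echo "srvctl add database -d $DB -o $OHOME"
for node in $NODES; do
    # in this cluster the instance name matches the node name
    echo "srvctl add instance -d $DB -i $node -n $node"
done
```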
Source: ITPUB blog, http://blog.itpub.net/29446986/viewspace-1561631/