OS Environment
[root@orcl ~]# lsb_release -a
LSB Version:    :core-3.1-ia32:core-3.1-noarch:graphics-3.1-ia32:graphics-3.1-noarch
Distributor ID: EnterpriseEnterpriseServer
Description:    Enterprise Linux Enterprise Linux Server release 5.4 (Carthage)
Release:        5.4
Codename:       Carthage
[root@orcl ~]# uname -rm
2.6.18-164.el5 i686
Preparation
1. A 50 GB disk to be used as ASM disks
2. The ASMLib management packages
[root@orcl 20110104_175230]# ls -lrth
total 225K
-r-xr-xr-x 1 root root  84K Jan  4  2011 oracleasm-support-2.1.3-1.el5.i386.rpm
-r-xr-xr-x 1 root root  14K Jan  4  2011 oracleasmlib-2.0.4-1.el5.i386.rpm
-r-xr-xr-x 1 root root 128K Jan  4  2011 oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm
3. The GI and DB installation media
[root@orcl 11.2.0]# ls -lrth
total 6.0K
dr-xr-xr-x 1 root root 2.0K Oct 22  2011 deinstall
dr-xr-xr-x 1 root root 2.0K Oct 22  2011 database
dr-xr-xr-x 1 root root 2.0K Oct 22  2011 clusterware
Installation
1. Partition the disk
[root@orcl ~]# fdisk -l        // view the partition tables
Disk /dev/sda: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        6527    52323705   8e  Linux LVM

Disk /dev/sdb: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table
[root@orcl ~]# fdisk /dev/sdb        // partition the new disk
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 6527.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n        // add a new partition
Command action
   e   extended
   p   primary partition (1-4)
p        // primary partition
Partition number (1-4): 1        // partition number
First cylinder (1-6527, default 1):        // accept the default, 1
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-6527, default 6527): +2048M        // make the partition 2 GB
.
.
.
Create primary partitions 2 and 3 the same way, then assign all of the remaining space to an extended partition.
Command (m for help): n
Command action
e extended
p primary partition (1-4)
e        // create the extended partition
Selected partition 4
First cylinder (751-6527, default 751):
Using default value 751
Last cylinder or +size or +sizeM or +sizeK (751-6527, default 6527):
Using default value 6527
Create logical partitions inside the extended partition
Command (m for help): n
First cylinder (751-6527, default 751):
Using default value 751
Last cylinder or +size or +sizeM or +sizeK (751-6527, default 6527): +2048M
.
.
.
Command (m for help): n
First cylinder (3251-6527, default 3251):
Using default value 3251
Last cylinder or +size or +sizeM or +sizeK (3251-6527, default 6527): +2048M
Command (m for help): n
The maximum number of partitions has been created
Command (m for help): p        // print the partition table
Disk /dev/sdb: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 250 2008093+ 83 Linux
/dev/sdb2 251 500 2008125 83 Linux
/dev/sdb3 501 750 2008125 83 Linux
/dev/sdb4 751 6527 46403752+ 5 Extended
/dev/sdb5 751 1000 2008093+ 83 Linux
/dev/sdb6 1001 1250 2008093+ 83 Linux
/dev/sdb7 1251 1500 2008093+ 83 Linux
/dev/sdb8 1501 1750 2008093+ 83 Linux
/dev/sdb9 1751 2000 2008093+ 83 Linux
/dev/sdb10 2001 2250 2008093+ 83 Linux
/dev/sdb11 2251 2500 2008093+ 83 Linux
/dev/sdb12 2501 2750 2008093+ 83 Linux
/dev/sdb13 2751 3000 2008093+ 83 Linux
/dev/sdb14 3001 3250 2008093+ 83 Linux
/dev/sdb15 3251 3500 2008093+ 83 Linux
Command (m for help): w        // write the partition table to disk
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@orcl ~]# partprobe        // tell the kernel the partition table has changed
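The sizes fdisk reports can be sanity-checked by hand: with this geometry one cylinder is 255 × 63 × 512 bytes, and the "Blocks" column is in 1 KiB units. A quick sketch of the arithmetic for one of the 250-cylinder partitions above:

```shell
# Disk geometry reported by fdisk: 255 heads, 63 sectors/track, 512-byte sectors
bytes_per_cyl=$((255 * 63 * 512))   # 8225280 bytes per cylinder
cyls=250                            # cylinders in each small partition

# fdisk's "Blocks" column counts 1 KiB units
blocks=$((cyls * bytes_per_cyl / 1024))
echo "one 250-cylinder partition = ${blocks} blocks (~$((blocks / 1024)) MiB)"
```

The result, 2008125 blocks (about 1.9 GB), matches the figure fdisk prints for the cylinder-aligned partitions; entries marked with a trailing `+` are a track short of that.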
2. Create users and groups
[root@orcl ~]# groupadd oinstall
[root@orcl ~]# groupadd dba
[root@orcl ~]# groupadd oper
[root@orcl ~]# groupadd asmadmin
[root@orcl ~]# groupadd asmdba
[root@orcl ~]# groupadd asmoper
[root@orcl ~]# useradd -g oinstall -G asmadmin,asmdba,asmoper,dba grid
[root@orcl ~]# useradd -g oinstall -G dba,oper,asmdba oracle
[root@orcl ~]# passwd grid
Changing password for user grid.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@orcl ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
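The account setup above can also be scripted. A dry-run sketch (the function name is mine; it only prints the commands for review — remove the echoes to actually execute them as root):

```shell
# Dry-run: print the group/user creation commands for the GI/DB
# split-ownership model (grid owns the GI stack, oracle owns the database).
print_user_setup() {
    for g in oinstall dba oper asmadmin asmdba asmoper; do
        echo groupadd "$g"
    done
    echo useradd -g oinstall -G asmadmin,asmdba,asmoper,dba grid
    echo useradd -g oinstall -G dba,oper,asmdba oracle
}
print_user_setup
```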
3. Create the GI and DB software directories
[root@orcl ~]# mkdir -p /u01/app/grid
[root@orcl ~]# mkdir -p /u01/app/oracle
[root@orcl ~]# chown -R grid.oinstall /u01
[root@orcl ~]# chown -R oracle.oinstall /u01/app/oracle
[root@orcl ~]# chmod -R 775 /u01
4. Configure the environment variables for the grid and oracle users
[root@orcl ~]# su - grid
[grid@orcl ~]$ vi .bash_profile        // edit, then verify with cat
[grid@orcl ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
umask 022
export TMP=/tmp
export TMPDIR=/tmp
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/grid/product/11.2.0/gridhome_1
export ORACLE_SID=+ASM
export PATH=$ORACLE_HOME/bin:$PATH
[grid@orcl ~]$ logout
[root@orcl ~]# su - oracle
[oracle@orcl ~]$ vi .bash_profile        // edit, then verify with cat
[oracle@orcl ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
umask 022
export TMP=/tmp
export TMPDIR=/tmp
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=orcl
export PATH=$ORACLE_HOME/bin:$PATH
[oracle@orcl ~]$ logout
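Each profile can be spot-checked before installing anything. A small sketch (the function name is mine) that fails fast if the Oracle environment is incomplete, using the variable names exported in the profiles above:

```shell
# Verify the Oracle environment variables a login profile should export.
# Prints the first missing variable and returns non-zero if any are unset.
check_oracle_env() {
    for v in ORACLE_BASE ORACLE_HOME ORACLE_SID; do
        eval "val=\$$v"
        if [ -z "$val" ]; then
            echo "missing: $v" >&2
            return 1
        fi
    done
    echo "environment OK: $ORACLE_SID @ $ORACLE_HOME"
}
```

Run it from a fresh login shell for each user, e.g. `su - grid -c '. ~/.bash_profile; check_oracle_env'` (assuming the function is pasted or sourced first).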
5. Install and configure the ASMLib packages
[root@orcl 20110104_175230]# rpm -ivh *
warning: oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.18-164.el########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]
[root@orcl 20110104_175230]# rpm -qa | grep oracleasm
oracleasm-support-2.1.3-1.el5
oracleasm-2.6.18-164.el5-2.0.5-1.el5
oracleasmlib-2.0.4-1.el5
[root@orcl ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
6. Create the ASM disks
[root@orcl ~]# /etc/init.d/oracleasm help
Usage: /etc/init.d/oracleasm {start|stop|restart|enable|disable|configure|createdisk|deletedisk|querydisk|listdisks|scandisks|status}
[root@orcl ~]# /etc/init.d/oracleasm createdisk ASMDISK01 /dev/sdb1
Marking disk "ASMDISK01" as an ASM disk:                   [  OK  ]
[root@orcl ~]# /etc/init.d/oracleasm createdisk ASMDISK02 /dev/sdb2
Marking disk "ASMDISK02" as an ASM disk:                   [  OK  ]
[root@orcl ~]# /etc/init.d/oracleasm createdisk ASMDISK03 /dev/sdb3
Marking disk "ASMDISK03" as an ASM disk:                   [  OK  ]
[root@orcl ~]# /etc/init.d/oracleasm createdisk ASMDISK04 /dev/sdb5        // /dev/sdb4 (the extended partition) cannot be used
Marking disk "ASMDISK04" as an ASM disk:                   [  OK  ]
[root@orcl ~]# /etc/init.d/oracleasm createdisk ASMDISK05 /dev/sdb6
Marking disk "ASMDISK05" as an ASM disk:                   [  OK  ]
[root@orcl ~]# /etc/init.d/oracleasm createdisk ASMDISK06 /dev/sdb7
Marking disk "ASMDISK06" as an ASM disk:                   [  OK  ]
[root@orcl ~]# /etc/init.d/oracleasm createdisk ASMDISK07 /dev/sdb8
Marking disk "ASMDISK07" as an ASM disk:                   [  OK  ]
[root@orcl ~]# /etc/init.d/oracleasm createdisk ASMDISK08 /dev/sdb9
Marking disk "ASMDISK08" as an ASM disk:                   [  OK  ]
[root@orcl ~]# /etc/init.d/oracleasm createdisk ASMDISK09 /dev/sdb10
Marking disk "ASMDISK09" as an ASM disk:                   [  OK  ]
[root@orcl ~]# /etc/init.d/oracleasm createdisk ASMDISK10 /dev/sdb11
Marking disk "ASMDISK10" as an ASM disk:                   [  OK  ]
[root@orcl ~]# /etc/init.d/oracleasm createdisk ASMDISK11 /dev/sdb12
Marking disk "ASMDISK11" as an ASM disk:                   [  OK  ]
[root@orcl ~]# /etc/init.d/oracleasm createdisk ASMDISK12 /dev/sdb13
Marking disk "ASMDISK12" as an ASM disk:                   [  OK  ]
[root@orcl ~]# /etc/init.d/oracleasm createdisk ASMDISK13 /dev/sdb14
Marking disk "ASMDISK13" as an ASM disk:                   [  OK  ]
[root@orcl ~]# /etc/init.d/oracleasm createdisk ASMDISK14 /dev/sdb15
Marking disk "ASMDISK14" as an ASM disk:                   [  OK  ]
[root@orcl ~]# /etc/init.d/oracleasm listdisks
ASMDISK01
ASMDISK02
ASMDISK03
ASMDISK04
ASMDISK05
ASMDISK06
ASMDISK07
ASMDISK08
ASMDISK09
ASMDISK10
ASMDISK11
ASMDISK12
ASMDISK13
ASMDISK14
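Labeling fourteen partitions one by one is tedious. A dry-run sketch (the function name is mine; it only prints the commands) that generates the same mapping, skipping /dev/sdb4 because the extended partition itself cannot carry an ASM label:

```shell
# Print the oracleasm createdisk commands for /dev/sdb1-3 and /dev/sdb5-15.
# /dev/sdb4 is the extended container, so logical partitions start at sdb5.
print_createdisk_cmds() {
    n=0
    for p in 1 2 3 5 6 7 8 9 10 11 12 13 14 15; do
        n=$((n + 1))
        printf '/etc/init.d/oracleasm createdisk ASMDISK%02d /dev/sdb%s\n' "$n" "$p"
    done
}
print_createdisk_cmds
```

Pipe the output through `sh` (as root) to execute it, or just inspect it first.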
7. Install the GI software (display resolution: 1024)
[root@orcl tmp]# xhost +
access control disabled, clients can connect from any host
[root@orcl tmp]# su - grid
[grid@orcl ~]$ cd /tmp/stage/11.2.0/clusterware/Disk1/
[grid@orcl Disk1]$ ls -lrth
total 44K
dr-xr-xr-x  8 root root 4.0K May  9 23:06 doc
dr-xr-xr-x  2 root root 4.0K May  9 23:06 sshsetup
-r-xr-xr-x  1 root root 4.3K May  9 23:06 runInstaller
-r-xr-xr-x  1 root root 3.8K May  9 23:06 runcluvfy.sh
dr-xr-xr-x  2 root root 4.0K May  9 23:06 rpm
dr-xr-xr-x  2 root root 4.0K May  9 23:06 response
dr-xr-xr-x  4 root root 4.0K May  9 23:06 install
-r-xr-xr-x  1 root root 4.2K May  9 23:07 welcome.html
dr-xr-xr-x 14 root root 4.0K May  9 23:07 stage
[grid@orcl Disk1]$ export DISPLAY=192.168.159.1:0.0
[grid@orcl Disk1]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 80 MB.   Actual 36867 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4031 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-05-09_11-24-30PM. Please wait ...
[grid@orcl Disk1]$ You can find the log of this install session at:
/u01/app/oraInventory/logs/installActions2017-05-09_11-24-30PM.log
1/11 GI for standalone
2/11 Select languages
3/11 Create the ASM Disk Group: select 4 disks and Normal redundancy
4/11 Set the ASM passwords
5/11 Assign the OS groups
6/11 Choose the installation location
7/11 Create the inventory directory
8/11 Prerequisite checks
Items whose Fixable column shows Yes can be repaired by the CVU fixup scripts (mainly the maximum-open-files limit and the kernel parameter settings). Clicking Fix & Check Again brings up the window below.
Run the script shown in the window as root to resolve the fixable items.
The missing packages can be found on the OS installation DVD. Once they are installed, click OK to re-run the prerequisite checks and continue when they pass.
[root@orcl CVU_11.2.0.1.0_grid]# pwd
/tmp/CVU_11.2.0.1.0_grid
[root@orcl CVU_11.2.0.1.0_grid]# ls -lrth
total 1.9M
-r-xr-xr-x 1 grid oinstall  975 May  9 23:07 runfixup.sh
-r-xr-xr-x 1 grid oinstall  60K May  9 23:07 orarun.sh
-r-xr-xr-x 1 grid oinstall  223 May  9 23:07 exectask.sh
-r-xr-xr-x 1 grid oinstall 1.8M May  9 23:07 exectask
-r-xr-xr-x 1 grid oinstall 7.7K May  9 23:07 cvuqdisk-1.0.7-1.rpm
drwxr-xr-x 3 grid oinstall 4.0K May  9 23:35 fixup
-rw-r--r-- 1 grid oinstall  253 May  9 23:35 fixup.response
-rw-r--r-- 1 grid oinstall   53 May  9 23:35 fixup.enable
drwxrwxrwx 2 grid oinstall 4.0K May  9 23:35 scratch
[root@orcl CVU_11.2.0.1.0_grid]# ./runfixup.sh
Response file being used is :./fixup.response
Enable file being used is :./fixup.enable
Log file location: ./orarun.log
Setting Kernel Parameters...
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
uid=500(grid) gid=500(oinstall) groups=500(oinstall),501(dba),503(asmadmin),504(asmdba),505(asmoper)
[root@orcl Server]# pwd
/media/Enterprise Linux dvd 20090908/Server
[root@orcl Server]# rpm -ivh libaio-devel-0.3.106-3.2.i386.rpm sysstat-7.0.2-3.el5.i386.rpm unixODBC-2.2.11-7.1.i386.rpm unixODBC-devel-2.2.11-7.1.i386.rpm
warning: libaio-devel-0.3.106-3.2.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:libaio-devel           ########################################### [ 25%]
   2:unixODBC               ########################################### [ 50%]
   3:sysstat                ########################################### [ 75%]
   4:unixODBC-devel         ########################################### [100%]
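The values runfixup.sh writes can be re-checked later against the minimums it printed. A sketch (the function name is mine; the thresholds are taken from the fixup output above) that reads `name = value` lines, as produced by `sysctl`:

```shell
# Check selected kernel parameters against the minimums the fixup script
# configured. Input: "name = value" lines on stdin (sysctl output format).
check_kernel_mins() {
    awk '
        BEGIN {
            min["fs.file-max"]       = 6815744
            min["fs.aio-max-nr"]     = 1048576
            min["net.core.rmem_max"] = 4194304
        }
        $1 in min {
            status = ($3 >= min[$1]) ? "OK" : "LOW"
            printf "%s %s (need >= %s): %s\n", $1, $3, min[$1], status
        }
    '
}
# Example against the live settings:
#   sysctl fs.file-max fs.aio-max-nr net.core.rmem_max | check_kernel_mins
```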
9/11 Installation summary
10/11 Progress
During the installation, run the scripts shown in the pop-up window as root.
[root@orcl Server]# /u01/app/oraInventory/orainstRoot.sh        // run the first script
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@orcl Server]# /u01/app/grid/product/11.2.0/gridhome_1/root.sh        // run the second script
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/grid/product/11.2.0/gridhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2017-05-10 00:21:31: Checking for super user privileges
2017-05-10 00:21:31: User has super user privileges
2017-05-10 00:21:31: Parsing the host name
Using configuration parameter file: /u01/app/grid/product/11.2.0/gridhome_1/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
CRS-4664: Node orcl successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
orcl     2017/05/10 00:22:19     /u01/app/grid/product/11.2.0/gridhome_1/cdata/orcl/backup_20170510_002219.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4031 MB    Passed
Error: OUI cannot be launched because the current working directory is set on the CD-ROM mount point.
Launching OUI from this directory will make it difficult to unmount the disk later in the installation.
Please change the working directory and relaunch OUI.
You can change the working directory by typing 'cd' (e.g. cd /home)
and then execute the 'runInstaller' command by typing its full path (e.g. /mnt/cdrom/runInstaller)
// The script failed: OUI cannot be relaunched because root.sh was run from the CD-ROM mount point
[root@orcl ~]# /u01/app/grid/product/11.2.0/gridhome_1/crs/install/roothas.pl --delete -force -verbose        // deconfigure the CRS setup
2017-05-10 00:27:11: Checking for super user privileges
2017-05-10 00:27:11: User has super user privileges
2017-05-10 00:27:11: Parsing the host name
Using configuration parameter file: /u01/app/grid/product/11.2.0/gridhome_1/crs/install/crsconfig_params
CRS-2500: Cannot stop resource 'ora.cssd' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
CRS-4133: Oracle High Availability Services has been stopped.
ACFS-9200: Supported
Successfully deconfigured Oracle Restart stack
[root@orcl ~]# /u01/app/grid/product/11.2.0/gridhome_1/root.sh        // change directory and rerun the script
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/grid/product/11.2.0/gridhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2017-05-10 00:27:58: Checking for super user privileges
2017-05-10 00:27:58: User has super user privileges
2017-05-10 00:27:58: Parsing the host name
Using configuration parameter file: /u01/app/grid/product/11.2.0/gridhome_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
CRS-4664: Node orcl successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
orcl     2017/05/10 00:28:18     /u01/app/grid/product/11.2.0/gridhome_1/cdata/orcl/backup_20170510_002818.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4031 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.        // success
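The failure above is worth guarding against: root.sh finishes by relaunching OUI to update the inventory, and OUI refuses to start when the working directory sits on the installation media. A sketch of a pre-flight guard (the function name is mine, and it assumes removable media is mounted under /media or /mnt) to run before invoking root.sh:

```shell
# Refuse to proceed when the given (or current) directory is on removable
# media, which makes the OUI relaunch inside root.sh fail.
safe_cwd_for_installer() {
    dir=${1:-$(pwd)}
    case "$dir" in
        /media/*|/mnt/*)
            echo "cd out of the media mount point first (e.g. cd /root)" >&2
            return 1 ;;
        *)
            return 0 ;;
    esac
}
```

Usage: `safe_cwd_for_installer && /u01/app/grid/product/11.2.0/gridhome_1/root.sh`.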
11/11 GI installation complete
8. Install the DB software
[root@orcl ~]# xhost +
access control disabled, clients can connect from any host
[root@orcl ~]# su - oracle
[oracle@orcl ~]$ cd /tmp/stage/11.2.0/database/Disk1/
[oracle@orcl Disk1]$ ls -lrth
total 40K
dr-xr-xr-x  2 root root 4.0K May  9 23:08 sshsetup
-r-xr-xr-x  1 root root 4.3K May  9 23:08 runInstaller
dr-xr-xr-x  2 root root 4.0K May  9 23:08 rpm
dr-xr-xr-x  2 root root 4.0K May  9 23:08 response
dr-xr-xr-x  4 root root 4.0K May  9 23:08 install
dr-xr-xr-x 10 root root 4.0K May  9 23:08 doc
-r-xr-xr-x  1 root root 4.9K May  9 23:11 welcome.html
dr-xr-xr-x 14 root root 4.0K May  9 23:11 stage
[oracle@orcl Disk1]$ export DISPLAY=192.168.159.1:0.0
[oracle@orcl Disk1]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 80 MB.   Actual 34000 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4031 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-05-10_01-07-54AM. Please wait ...
[oracle@orcl Disk1]$ You can find the log of this install session at:
/u01/app/oraInventory/logs/installActions2017-05-10_01-07-54AM.log
1/11 Security updates account: skip
2/11 Install database software only
3/11 Single instance
4/11 Language selection
5/11 Select Enterprise Edition
6/11 Choose the installation location
7/11 Assign the OS groups
8/11 Prerequisite checks
Repair the fixable items with the CVU fixup scripts.
Run the script shown in the pop-up window as root.
[root@orcl CVU_11.2.0.1.0_oracle]# pwd
/tmp/CVU_11.2.0.1.0_oracle
[root@orcl CVU_11.2.0.1.0_oracle]# ls -lrth
total 1.9M
-r-xr-xr-x 1 oracle oinstall 7.7K May  9 23:11 cvuqdisk-1.0.7-1.rpm
-r-xr-xr-x 1 oracle oinstall  975 May  9 23:11 runfixup.sh
-r-xr-xr-x 1 oracle oinstall  60K May  9 23:11 orarun.sh
-r-xr-xr-x 1 oracle oinstall  223 May  9 23:11 exectask.sh
-r-xr-xr-x 1 oracle oinstall 1.8M May  9 23:11 exectask
-rw-r--r-- 1 oracle oinstall   54 May 10 01:15 fixup.response
-rw-r--r-- 1 oracle oinstall   24 May 10 01:15 fixup.enable
drwxr-xr-x 3 oracle oinstall 4.0K May 10 01:15 fixup
drwxrwxrwx 2 oracle oinstall 4.0K May 10 01:15 scratch
[root@orcl CVU_11.2.0.1.0_oracle]# ./runfixup.sh
Response file being used is :./fixup.response
Enable file being used is :./fixup.enable
Log file location: ./orarun.log
uid=501(oracle) gid=500(oinstall) groups=500(oinstall),501(dba),502(oper),504(asmdba)
9/11 Summary
10/11 Progress
Run the script shown in the pop-up window as root.
[root@orcl CVU_11.2.0.1.0_oracle]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
11/11 Finish
9. Create the FRA Disk Group
[root@orcl ~]# xhost +
access control disabled, clients can connect from any host
[root@orcl ~]# su - grid
[grid@orcl ~]$ export DISPLAY=192.168.159.1:0.0
[grid@orcl ~]$ asmca
Create the FRA Disk Group: select External (None) redundancy and, again, 4 disks.
Created successfully.
10. Create the database
Welcome
1/12 Create a database
2/12 General purpose / transaction processing
3/12 Set the global database name and SID
4/12 Enable OEM
5/12 Set the administrator passwords
6/12 Use ASM storage and place the data files in the DATA disk group
ASM credentials: the password set during the GI installation
7/12 Enable flashback and select the FRA disk group
8/12 Include the sample schemas (used for experiments later)
9/12 Change the character set to AL32UTF8
10/12 Confirm the storage settings
11/12 Creation options: defaults
12/12 Progress
Finish
11. Done (after a VM lab install, it is a good idea to take a snapshot :))
[root@orcl ~]# su - grid
[grid@orcl ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Wed May 10 03:16:28 2017

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Automatic Storage Management option

SQL> select name,state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA                           MOUNTED
FRA                            MOUNTED

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Automatic Storage Management option
[grid@orcl ~]$ logout
[root@orcl ~]# su - oracle
[oracle@orcl ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Wed May 10 03:17:12 2017

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options