Summary of Issues from Installing and Upgrading 11gR2 RAC on AIX 7.1

Problem 1: The pre-installation hosts configuration check fails with the following errors:

Checking hosts config file...

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  bildb1rac2    failed                    Invalid Entry          

  bildb1rac1    failed                    Invalid Entry 

ERROR:

PRVF-4190 : Verification of the hosts config file failed

[INS-41112] Specified network interface doesnt maintain connectivity across cluster nodes

Solution:

This is an /etc/hosts problem: remove all blank lines and tab characters, and comment out the IPv6 entry.

1. Comment the following line:

From:

::1

To:

# ::1

2. Remove all blank lines and tabs: either delete the blank lines or comment them out, and replace tabs with spaces.
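The checks above can be scripted; the following is a minimal POSIX-shell sketch that flags the three problems named in this solution (blank lines, tabs, uncommented IPv6 entries) in a hosts file:

```shell
# Sketch: flag the /etc/hosts problems described above.
# Run it on each node; pass the file to inspect.
check_hosts() {
  f=$1
  grep -n '^[[:space:]]*$' "$f" | sed 's/^/blank line: /'   # blank lines
  grep -n "$(printf '\t')" "$f" | sed 's/^/tab found:  /'   # tab characters
  grep -n '^::1' "$f"           | sed 's/^/ipv6 entry: /'   # uncommented IPv6
  return 0
}
check_hosts /etc/hosts
```

Any line the script reports should be deleted, de-tabbed, or commented out before re-running the installer check.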

Problem 2: The pre-installation user check fails because the grid user is not a member of the dba group.

Solution:

Our deployment uses the oinstall group, so in theory the warning can be ignored; I chose to ignore it.

Problem 3: The network interface check fails on the node-selection screen.

Solution:

When configuring SSH user equivalence, trust must be established for the public and private hostnames of every node, including each node's own public and private names. Verify with:

ssh bildb1rac2 date

ssh bildb1rac2-priv date

ssh bildb1rac1 date

ssh bildb1rac1-priv date
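The four checks above form a matrix (every node × public/private name); a small sketch can generate the full list so no combination is missed. The node names are the examples used in this document:

```shell
# Sketch: emit every equivalence check that must succeed before the
# OUI network check will pass (run the printed commands from each node).
ssh_check_matrix() {
  for n in "$@"; do
    for suffix in "" "-priv"; do
      echo "ssh ${n}${suffix} date"
    done
  done
}
ssh_check_matrix bildb1rac1 bildb1rac2
```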

Problem 4: The private network interface is not detected.

Solution:

The check passed when run a second time.

Problem 5: The network parameter check produces many warnings.

Solution:

While the OUI was running CVU for the pre-installation check, it found that network parameters such as udp_sendspace did not have the required values. A manual check showed the parameters had already been set with the `no` command, yet the output of `no -a` did not match what `ifconfig -a` reported.

Per the MOS note "PRVE-0273 : The value of network parameter "udp_sendspace" for interface "en0" is not configured to ..." (Doc ID 1373242.1), this is an Oracle bug. It can be worked around with the following changes:

1. Edit /etc/rc.net and add:

if [ -f /usr/sbin/no ] ; then
    /usr/sbin/no -p -o tcp_ephemeral_low=9000
    /usr/sbin/no -p -o udp_ephemeral_low=9000
    /usr/sbin/no -p -o tcp_ephemeral_high=65500
    /usr/sbin/no -p -o udp_ephemeral_high=65500
    /usr/sbin/no -p -o udp_sendspace=65536
    /usr/sbin/no -p -o udp_recvspace=655360
    /usr/sbin/no -p -o tcp_sendspace=65536
    /usr/sbin/no -p -o tcp_recvspace=65536
    /usr/sbin/no -p -o rfc1323=1
    /usr/sbin/no -p -o sb_max=4194304
    /usr/sbin/no -r -o ipqmaxlen=512
fi

2. Create symbolic links:

ln -s /usr/sbin/no /etc/no

ln -s /usr/sbin/lsattr /etc/lsattr

If the network parameter warnings still appear on Check Again after these changes, they can be ignored and the installation continued.

The changes themselves are mandatory, however; otherwise root.sh will fail after the OUI completes.
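If it helps to keep the tunables in one place, the rc.net stanza above can be regenerated from a single table and diffed against `no -a` on each node. A sketch, using the values from this document:

```shell
# Sketch: print the AIX `no` commands for the required tunables, so the same
# list can be reviewed, diffed against `no -a`, or re-applied on every node.
no_commands() {
  for kv in tcp_ephemeral_low=9000 udp_ephemeral_low=9000 \
            tcp_ephemeral_high=65500 udp_ephemeral_high=65500 \
            udp_sendspace=65536 udp_recvspace=655360 \
            tcp_sendspace=65536 tcp_recvspace=65536 \
            rfc1323=1 sb_max=4194304; do
    echo "/usr/sbin/no -p -o $kv"
  done
  echo "/usr/sbin/no -r -o ipqmaxlen=512"   # -r: takes effect after reboot
}
no_commands
```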

The rfc1323 check reports the following:

network parameter - rfc1323 - Checks if the network parameter is set correctly on the system

  Check Failed on Nodes: [esop_db2, esop_db1]

Verification result of failed node: esop_db2

Expected Value : 1

Actual Value : en11=0

Details:

PRVE-0273 : The value of network parameter "rfc1323" for interface "en11" is not configured to the expected value on node "esop_db2". [Expected="1"; Found="en11=0"]


Verification result of failed node: esop_db1

Expected Value : 1

Actual Value : en11=0

Details:

PRVE-0273 : The value of network parameter "rfc1323" for interface "en11" is not configured to the expected value on node "esop_db1". [Expected="1"; Found="en11=0"]


Solution:

Use the AIX command:

# smitty chinet

Select the affected interface (enX) and correct the parameters there, for example rfc1323=1 on en11.

Re-run the check; the error no longer appears.

Problem 6: At the ASM disk-selection step, keyboard input stops working: the disk-group name cannot be edited, the sys and system passwords cannot be entered, and the installation cannot continue.

The mouse still works elsewhere on the screen, but clicking into the affected input fields has no effect, so the passwords cannot be set.

Solution:

This is probably related to the security hardening of the environment and to the operating system itself. None of the following attempts resolved it:

1. Moving the installation package to another machine;

2. Installing JDK 1.6 on the operating system;

3. Downloading a fresh installation package to replace the original;

4. Confirming that SSH equivalence, user and group IDs, disk ownership, PVIDs, and so on were all correct;

5. Applying a workaround found online, running the following during installation:

xprop -root -remove _MOTIF_DEFAULT_BINDINGS

xprop -remove WM_LOCALE_NAME

xprop -root -remove XIM_SERVERS

6. Tracing the installation; the trace log revealed nothing:

./runInstaller -J-DTRACING.ENABLED=true -J-DTRACING.LEVEL=2

7. Bypassing the 4A bastion host and moving the laptop onto the internal network.

The problem was finally solved as follows:

Start Xmanager - Passive locally, log in to the server from a terminal, and set the DISPLAY variable:

export DISPLAY=10.154.54.164:0.0

Then run runInstaller. The installer GUI appears locally, and the ASM step completes normally.
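A quick sanity check of the DISPLAY setting before launching runInstaller; the IP below is the example workstation address used above:

```shell
# Sketch: set and sanity-check DISPLAY before starting the installer.
export DISPLAY=10.154.54.164:0.0
case $DISPLAY in
  *:*) echo "DISPLAY=$DISPLAY looks well-formed" ;;
  *)   echo "DISPLAY is malformed: $DISPLAY" ;;
esac
```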

Problem 7: After installation, the Grid Infrastructure and cluster verification utility step fails.

The log shows the following errors:

INFO: ERROR:

INFO: PRVG-1101 : SCAN name "bildb1rac-scan" failed to resolve

INFO: ERROR:

INFO: PRVF-4657 : Name resolution setup check for "bildb1rac-scan" (IP address: 10.154.50.158) failed

INFO: ERROR:

INFO: PRVF-4663 : Found configuration issue with the 'hosts' entry in the /etc/nsswitch.conf file

INFO: Verification of SCAN VIP and Listener setup failed

Solution:

Per the official explanation: because SCAN was not configured through DNS, the error can be ignored as long as the SCAN name can be pinged from every node.

Cause 1. The SCAN name is expected to be resolved by the local hosts file

The SCAN name is resolved by the local hosts file (/etc/hosts or %SystemRoot%\system32\drivers\etc\hosts) instead of DNS or GNS.

Solution: Oracle strongly recommends using DNS or GNS for SCAN name resolution, as the hosts file supports only one IP for the SCAN.

If the intention is to use the hosts file for SCAN name resolution, and the ping command returns the correct SCAN VIP, you can ignore the error and move forward.

If the intention is to use DNS or GNS for SCAN name resolution, comment out the SCAN entries in the local hosts file on all nodes, then re-run "$GRID_HOME/bin/cluvfy comp scan" to confirm.
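When the SCAN is resolved from the hosts file, it must map to exactly one IP; a small helper can count the entries for a SCAN name in a hosts-format file (the name below is this document's example):

```shell
# Sketch: count uncommented hosts-file entries for a SCAN name.
# A hosts-file-resolved SCAN should have exactly one entry.
scan_entries() {
  # args: hosts-file, scan-name; comment lines are ignored
  grep -v '^[[:space:]]*#' "$1" | grep -cw "$2"
}
scan_entries /etc/hosts bildb1rac-scan || true
```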

Problem 8: Warnings appear during the pre-database-installation check.

Solution:

Again a SCAN issue: we did not use DNS, which is what Oracle recommends. As long as the SCAN IP can be pinged from every node, the warning can be ignored.

Problem 9: When an error dialog appeared while the PicPick screen-capture tool was running, the dialog was lost and the installer's main window froze, accepting no further input.

This is a serious problem: an installation that is mostly complete has to be abandoned and restarted, and everything installed so far must be cleaned up first.

Solution: Take as few screenshots as possible, or use the operating system's own full-screen capture.

Problem 10: Steps for installing GI PSU 11.2.0.3.7

Generate the OCM response file as the grid user:

$ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /oracle/ocm.rsp

Run the installation as the root user:

opatch auto /oracle/software/112037 -ocmrf /oraclelog/ocm.rsp

Problem 11: Error while installing GI PSU 11.2.0.3.7:

# /oracle/oracle_soft/OPatch/opatch auto /oracle/oracle_soft/ -ocmrf /home/grid/ocm_gi.rsp

Executing /oracle/app/11.2.0/grid/perl/bin/perl /oracle/oracle_soft/OPatch/crs/patch11203.pl -patchdir /oracle -patchn oracle_soft -ocmrf /home/grid/ocm_gi.rsp -paramfile /oracle/app/11.2.0/grid/crs/install/crsconfig_params

/crs/install/crsconfig_params

/crs/install/s_crsconfig_defs

This is the main log file: /oracle/oracle_soft/OPatch/crs/log/opatchauto2013-10-02_16-52-27.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system: /oracle/oracle_soft/OPatch/crs/log/opatchauto2013-10-02_16-52-27.report.log

2013-10-02 16:52:27: Starting Clusterware Patch Setup

Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params

Either  does not exist or is not readable

Make sure the file exists and it has read and execute access

Clusterware home location  does not exist

Solution:

Investigation showed this is a bug: the patch cannot be applied with opatch auto and must be applied manually, as follows:

Manual Steps for Apply/Rollback Patch

Steps for Applying the Patch

Note:

You must stop the EM agent processes running from the database home, prior to patching the Oracle RAC database or GI Home. Execute the following command on the node to be patched.

As the Oracle RAC database home owner execute:

$ <ORACLE_HOME>/bin/emctl stop dbconsole

Execute the following on each node of the cluster in non-shared CRS and DB home environment to apply the patch.

  1. Stop the CRS managed resources running from DB homes.

If this is a GI Home environment, as the database home owner execute:

$ <ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file location> -n <node name>

If this is an Oracle Restart Home environment, as the database home owner execute:

$ <ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file location>

Note:

You need to make sure that the Oracle ACFS file systems are unmounted (see My Oracle Support document 1494652.1 How to Mount or Unmount ACFS File System While Applying GI Patches?) and all other Oracle processes are shutdown before you proceed.

  2. Run the pre root script.

If this is a GI Home, as the root user execute:

# <GI_HOME>/crs/install/rootcrs.pl -unlock

If this is an Oracle Restart Home, as the root user execute:

# <GI_HOME>/crs/install/roothas.pl -unlock
  3. Apply the CRS patch.

As the GI home owner execute:

$ <GI_HOME>/OPatch/opatch napply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/<GI_components_number>

As the GI home owner execute:

$ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/<DB_PSU_number>
  4. Run the pre script for the DB component of the patch.

As the database home owner execute:

$ <UNZIPPED_PATCH_LOCATION>/<GI_components_number>/custom/server/<GI_components_number>/custom/scripts/prepatch.sh -dbhome <ORACLE_HOME>
  5. Apply the DB patch.

As the database home owner execute:

$ <ORACLE_HOME>/OPatch/opatch napply -oh <ORACLE_HOME> -local <UNZIPPED_PATCH_LOCATION>/<GI_components_number>/custom/server/<GI_components_number>
$ <ORACLE_HOME>/OPatch/opatch apply -oh <ORACLE_HOME> -local <UNZIPPED_PATCH_LOCATION>/<DB_PSU_number>
  6. Run the post script for the DB component of the patch.

As the database home owner execute:

$ <UNZIPPED_PATCH_LOCATION>/<GI_components_number>/custom/server/<GI_components_number>/custom/scripts/postpatch.sh -dbhome <ORACLE_HOME>
  7. Run the post script.

As the root user execute:

# <GI_HOME>/rdbms/install/rootadd_rdbms.sh

If this is a GI Home, as the root user execute:

# <GI_HOME>/crs/install/rootcrs.pl -patch

If this is an Oracle Restart Home, as the root user execute:

# <GI_HOME>/crs/install/roothas.pl -patch
  8. If the message "A system reboot is recommended before using ACFS" is shown, then a reboot must be issued before continuing. Failure to do so will result in running with an unpatched ACFS/ADVM/OKS driver.
  9. Start the CRS managed resources that were earlier running from DB homes.

If this is a GI Home environment, as the database home owner execute:

$ <ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file location> -n <node name>

If this is an Oracle Restart Home environment, as the database home owner execute:

$ <ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file location> 

Steps for Rolling Back the Patch From a GI Home

Execute the following on each node of the cluster in non-shared CRS and DB home environment to rollback the patch.

  1. Stop the CRS managed resources running from DB homes.

If this is a GI Home environment, as the database home owner execute:

$ <ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file location> -n <node name>

If this is an Oracle Restart Home environment, as the database home owner execute:

$ <ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file location> 

Note:

You need to make sure that the Oracle ACFS file systems are unmounted (see My Oracle Support document 1494652.1 How to Mount or Unmount ACFS File System While Applying GI Patches?) and all other Oracle processes are shut down before you proceed.

  2. Run the pre root script.

If this is a GI Home, as the root user execute:

# <GI_HOME>/crs/install/rootcrs.pl -unlock

If this is an Oracle Restart Home, as the root user execute:

# <GI_HOME>/crs/install/roothas.pl -unlock
  3. Roll back the CRS patch.

As the GI home owner execute:

$ <GI_HOME>/OPatch/opatch rollback -local -id <GI_components_number> -oh <GI_HOME> 
$ <GI_HOME>/OPatch/opatch rollback -local -id <DB_PSU_number> -oh <GI_HOME> 
  4. Run the pre script for the DB component of the patch.

As the database home owner execute:

$ <UNZIPPED_PATCH_LOCATION>/<GI_components_number>/custom/server/<GI_components_number>/custom/scripts/prepatch.sh -dbhome <ORACLE_HOME>
  5. Roll back the DB patch from the database home.

As the database home owner execute:

$ <ORACLE_HOME>/OPatch/opatch rollback -local -id <GI_components_number> -oh <ORACLE_HOME> 
$ <ORACLE_HOME>/OPatch/opatch rollback -local -id <DB_PSU_number> -oh <ORACLE_HOME>
  6. Run the post script for the DB component of the patch.

As the database home owner execute:

$ <UNZIPPED_PATCH_LOCATION>/<GI_components_number>/custom/server/<GI_components_number>/custom/scripts/postpatch.sh -dbhome <ORACLE_HOME>
  7. Run the post script.

As the root user execute:

# <GI_HOME>/rdbms/install/rootadd_rdbms.sh

If this is a GI Home, as the root user execute:

# <GI_HOME>/crs/install/rootcrs.pl -patch

If this is an Oracle Restart Home, as the root user execute:

# <GI_HOME>/crs/install/roothas.pl -patch
  8. If the message "A system reboot is recommended before using ACFS" is shown, then a reboot must be issued before continuing. Failure to do so will result in running with an unpatched ACFS/ADVM/OKS driver.
  9. Start the CRS managed resources that were earlier running from DB homes.

If this is a GI Home environment, as the database home owner execute:

$ <ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file location> -n <node name>

If this is an Oracle Restart Home environment, as the database home owner execute:

$ <ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file location> 

Patching an Oracle RAC Home Installation Manually

Note that USM-only patches cannot be applied to a Database home.

  1. Run the pre script for DB component of the patch.

As the database home owner execute:

$ <UNZIPPED_PATCH_LOCATION>/<GI_components_number>/custom/server/<GI_components_number>/custom/scripts/prepatch.sh -dbhome <ORACLE_HOME>
  2. Apply the DB patch.

As the database home owner execute:

$ <ORACLE_HOME>/OPatch/opatch napply -oh <ORACLE_HOME> -local <UNZIPPED_PATCH_LOCATION>/<GI_components_number>/custom/server/<GI_components_number>
$ <ORACLE_HOME>/OPatch/opatch apply -oh <ORACLE_HOME> -local <UNZIPPED_PATCH_LOCATION>/<DB_PSU_number>
  3. Run the post script for the DB component of the patch.

As the database home owner execute:

$ <UNZIPPED_PATCH_LOCATION>/<GI_components_number>/custom/server/<GI_components_number>/custom/scripts/postpatch.sh -dbhome <ORACLE_HOME>

Rolling Back the Patch from an Oracle RAC Home Installation Manually

  1. Run the pre script for DB component of the patch.

As the database home owner execute:

$ <UNZIPPED_PATCH_LOCATION>/<GI_components_number>/custom/server/<GI_components_number>/custom/scripts/prepatch.sh -dbhome <ORACLE_HOME>
  2. Roll back the DB patch from the database home.

As the database home owner execute:

$ <ORACLE_HOME>/OPatch/opatch rollback -local -id <GI_components_number> -oh <ORACLE_HOME>
$ <ORACLE_HOME>/OPatch/opatch rollback -local -id <DB_PSU_number> -oh <ORACLE_HOME>
  3. Run the post script for the DB component of the patch.

As the database home owner execute:

$ <UNZIPPED_PATCH_LOCATION>/<GI_components_number>/custom/server/<GI_components_number>/custom/scripts/postpatch.sh -dbhome <ORACLE_HOME>

Problem 12: Warnings appear during manual PSU installation; they can be ignored:

OPatch found the word "warning" in the stderr of the make command.

Please look at this stderr. You can re-run this make command.

Stderr output:

ld: 0711-773 WARNING: Object /oracle/app/11.2.0/grid/lib//libgeneric11.a[sdbgrfu.o], imported symbol timezone

        Symbol was expected to be local. Extra instructions

        are being generated to reference the symbol.

Composite patch 16619892 successfully applied.

OPatch Session completed with warnings.

Log file location: /oracle/app/11.2.0/grid/cfgtoollogs/opatch/opatch2013-10-02_22-32-38PM_1.log

OPatch completed with warnings.

Problem 13: root.sh fails during GRID installation:

# /oracle/app/11.2.0/grid/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /oracle/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params

User ignored Prerequisites during installation

Failed to write the checkpoint:'' with status:FAIL.Error code is 256

Undefined subroutine &crsconfig_lib::dieformat called at /oracle/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 6135.

/oracle/app/11.2.0/grid/perl/bin/perl -I/oracle/app/11.2.0/grid/perl/lib -I/oracle/app/11.2.0/grid/crs/install /oracle/app/11.2.0/grid/crs/install/rootcrs.pl execution failed

Solution:

In 10g, a similar error (libskgxn* not found) appears while running root.sh or the 10g upgrade script. For this error, Oracle Metalink note ID 1382505.1 suggests the following:

1. When deinstalling vendor clusterware, make sure all associated files are removed; in this case, remove the symlink /usr/sbin/cluster/utilities/cldomain.

2. Clean up the failed GI installation with the $GRID_HOME/deinstall/deinstall command, or clean up manually following Document 1364419.1.

3. Reinstall Grid Infrastructure.

The AIX administrators, however, may report that no HACMP-related files remain because they removed them earlier. In that case, follow the steps below so that Oracle installs without error.

Removing HACMP-related files from AIX before installing Oracle

Before following these steps, confirm that nobody is using HACMP on the server, that the HACMP services are stopped, and that HACMP will not be needed in the future.

Step 1) cd /usr/sbin/cluster/utilities

             mv cldomain cldomain_orig

Step 2) Remove the "hagsuser" group using the smit security menu

Step 3) cd /var/ha/soc

            rm -rf *clients*

Step 4) Edit rootpre.sh to remove the HACMP-related section, then run rootpre.sh again.

CRS/DB can now be reinstalled.

If, however, you are installing CRS/DB on a test server and do not want to rerun the installation, the following steps allow root.sh/rootupgrade.sh to complete successfully.

Check whether libskgxn* points to non-existent symlinks under /opt/ORCLcluster:

$ ls -l /oracle/app/11.2.0.3/grid/lib/libskgxn*

lrwxrwxrwx 1 grid oinstall 33 Nov 23 03:08 /oracle/app/11.2.0.3/grid/lib/libskgxn2.so -> /opt/ORCLcluster/lib/libskgxn2.so

-rwxr-xr-x 1 grid oinstall 159806 Oct 20 23:55 /oracle/app/11.2.0.3/grid/lib/libskgxnr.a

lrwxrwxrwx 1 grid oinstall 33 Nov 23 09:38 /oracle/app/11.2.0.3/grid/lib/libskgxnr.so -> /opt/ORCLcluster/lib/libskgxnr.so

Then remove these symlinks and copy the libskgxn* files from another server on which the same Oracle version was installed successfully.

After copying the libskgxn* files, run root.sh/rootupgrade.sh again; it will complete without issue.

# rmgroup hagsuser    (the group name)

# cd /var/ha/soc

# mv -f haem haem_bak
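A quick way to spot the stale links described above is to list dangling symlinks under a directory. A sketch; the directory argument is whatever you want to inspect, e.g. /opt/ORCLcluster/lib:

```shell
# Sketch: report symlinks in a directory whose targets no longer exist.
# Such stale links under /opt/ORCLcluster/lib are what break root.sh here.
dangling_links() {
  for f in "$1"/*; do
    [ -L "$f" ] && [ ! -e "$f" ] && echo "dangling: $f"
  done
  return 0
}
# Example: dangling_links /opt/ORCLcluster/lib
```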

Problem 14: Installing the database software fails with the following errors.

The log shows:

INFO: Validating node readiness...

INFO: Preparing to check passwordless SSH Connectivity between nodes: [bildb2rac1, bildb2rac2]

INFO: Testing passwordless SSH connectivity between the selected nodes. This may take several minutes, please wait...

INFO: OverallStatus of User Equivalence check using CVU is OPERATION_FAILED

INFO: VerificationError:  Either a node failed in the middle of a manageability operation, or the communication between nodes was disrupted.

SEVERE: [FATAL] [INS-06006] Passwordless SSH connectivity not set up between the following node(s): [bildb2rac1].

   CAUSE: Either passwordless SSH connectivity is not setup between specified node(s) or they are not reachable. Refer to the logs for more details.

   ACTION: Refer to the logs for more details or contact Oracle Support Services.

Solution:

SSH equivalence had been verified earlier, and passwordless login worked at that time.

Change the oracle password to match grid's, make it identical on all nodes, and remove the .ssh directory on every node:

rm -rf ~/.ssh

Then reconfigure user equivalence for the oracle user.

On both nodes, run:

mkdir /home/oracle/.ssh

chmod 700 /home/oracle/.ssh

ssh-keygen -t dsa

ssh-keygen -t rsa

cd /home/oracle/.ssh

cat id_dsa.pub > authorized_keys

cat id_rsa.pub >> authorized_keys

On node 1, run:

ssh bildb2rac2 cat /home/oracle/.ssh/authorized_keys >> authorized_keys

scp authorized_keys bildb2rac2:/home/oracle/.ssh/
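The per-node key setup above can be wrapped in one function, parameterized by home directory so it can be rehearsed in a scratch location before touching /home/oracle. Note that newer OpenSSH builds may reject DSA keys, so that step is allowed to fail:

```shell
# Sketch of the per-node key setup; pass the home directory to operate on.
setup_oracle_keys() {
  home_dir=$1
  mkdir -p "$home_dir/.ssh" && chmod 700 "$home_dir/.ssh"
  ssh-keygen -q -t rsa -N "" -f "$home_dir/.ssh/id_rsa"
  # DSA is rejected by recent OpenSSH; tolerate failure
  ssh-keygen -q -t dsa -N "" -f "$home_dir/.ssh/id_dsa" 2>/dev/null || true
  cat "$home_dir"/.ssh/id_*.pub > "$home_dir/.ssh/authorized_keys"
  chmod 600 "$home_dir/.ssh/authorized_keys"
}
# Example (on each node): setup_oracle_keys /home/oracle
```

After running it on both nodes, the cross-node append and scp steps above still have to be done once from node 1.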

Problem 15: Error during GI upgrade:

# /oracle/app/11.2.0/grid/OPatch/opatch auto /home/oracle/112037 -ocmrf /oraclelog/ocm.rsp

Executing /oracle/app/11.2.0/grid/perl/bin/perl /oracle/app/11.2.0/grid/OPatch/crs/patch11203.pl -patchdir /home/oracle -patchn 112037 -ocmrf /oraclelog/ocm.rsp -paramfile /oracle/app/11.2.0/grid/crs/install/crsconfig_params

/oracle/app/11.2.0/grid/crs/install/crsconfig_params

/oracle/app/11.2.0/grid/crs/install/s_crsconfig_defs

This is the main log file: /oracle/app/11.2.0/grid/cfgtoollogs/opatchauto2013-10-04_13-59-37.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system: /oracle/app/11.2.0/grid/cfgtoollogs/opatchauto2013-10-04_13-59-37.report.log

2013-10-04 13:59:37: Starting Clusterware Patch Setup

Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params

Not able to retreive database home information

Solution:

Start the CRS resources.

Problem 16: Error while creating the database.

The Oracle Spatial component fails with the following error:

ERROR at line 1:

ORA-31061: XDB error: XML event error

ORA-19202: Error occurred in XML processing

In line 18 of orastream:

The following error appears in ./oracle/cfgtoollogs/dbca/bildb2/spatial.log:

INSERT INTO mdsys.OpenLS_Nodes (

                  *

ERROR at line 1:

ORA-31061: XDB error: XML event error

ORA-19202: Error occurred in XML processing

In line 18 of orastream:

Solution:

The Spatial installation hit Bug 12645603: the database does not use the AL32UTF8 character set, which triggers a bug in the XML parser; the bug remains unfixed to this day. Since the production database does not use this component (verified with: select comp_name, version, status from dba_registry;), Spatial was deselected and the installation redone.

Oracle's suggested workaround:

1. Remove Spatial:
conn / as sysdba
drop user MDSYS cascade;

Optionally drop all remaining public synonyms created for Spatial:
set pagesize 0
set feed off
spool dropsyn.sql
select 'drop public synonym "' || synonym_name || '";' from dba_synonyms where table_owner='MDSYS';
spool off;
@dropsyn.sql

Spatial also creates a few user schemas during installation which can be dropped as well:

drop user mddata cascade;
-- Only created as of release 11g:
drop user spatial_csw_admin_usr cascade;
drop user spatial_wfs_admin_usr cascade;

2. Enable the event:
alter session set events '31156 trace name context forever, level 0x400';

3. Install Spatial by executing the steps shown below. Note that you need to run this as a SYSDBA user:

spool spatial_installation.lst
set echo on
@?/md/admin/mdinst.sql
spool off

It is strongly recommended that the MDSYS user account remain locked. The MDSYS user is
created with administrator privileges; therefore, it is important to protect this account from unauthorized
use. To lock the MDSYS user, connect as SYS and enter the following command:

  alter user MDSYS account lock;


4. Execute the following steps to verify whether Spatial is installed correctly:

  connect / as sysdba 
  set serveroutput on
  execute validate_sdo;
  select comp_id, control, schema, version, status, comp_name from dba_registry
  where comp_id='SDO';
  select object_name, object_type, status from dba_objects
  where owner='MDSYS' and status <> 'VALID'
  order by object_name;

Official reference:

DBCA Raises Errors ORA-31061 ORA-19202 During The Spatial Installation (Doc ID 1589593.1)

Problem 17: GI upgrade fails: unable to get oracle owner for …

# /oracle/app/11.2.0/grid/OPatch/opatch auto /home/oracle/112037  -ocmrf /oracle/ocm_file/ocm.rsp

Executing /oracle/app/11.2.0/grid/perl/bin/perl /oracle/app/11.2.0/grid/OPatch/crs/patch11203.pl -patchdir /home/oracle -patchn 112037 -ocmrf /oracle/ocm_file/ocm.rsp -paramfile /oracle/app/11.2.0/grid/crs/install/crsconfig_params

/oracle/app/11.2.0/grid/crs/install/crsconfig_params

/oracle/app/11.2.0/grid/crs/install/s_crsconfig_defs

This is the main log file: /oracle/app/11.2.0/grid/cfgtoollogs/opatchauto2013-10-11_22-01-07.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system: /oracle/app/11.2.0/grid/cfgtoollogs/opatchauto2013-10-11_22-01-07.report.log

2013-10-11 22:01:07: Starting Clusterware Patch Setup

Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params

unable to get oracle owner for /oracle/app/oracle/product/11.2.0/db

Cause and solution:

Rolling back the GI patch had deleted files under the database home, including $ORACLE_HOME/bin/oracle; at that point, sqlplus / as sysdba in the DB home could not even connect to an idle instance.

Copying the earlier oracle directory back fixed it.

Problem 18: Running a script while applying one-off patch 11072246 fails:

SQL>  @?/rdbms/admin/prvtstat.plb

Warning: Package Body created with compilation errors.

Errors for PACKAGE BODY DBMS_STATS:

LINE/COL ERROR

-------- -----------------------------------------------------------------

16611/7  PL/SQL: Statement ignored

16611/38 PLS-00302: component 'GET_IDX_TABPART' must be declared

Solution:

Run the following scripts:

@?/rdbms/admin/catalog.sql

@?/rdbms/admin/catproc.sql

Then re-run the failed script.

Problem 19: root.sh reports the following errors during GRID installation, although the script itself completes successfully:

root@testdb1:/oraclelog/grid#/oracle/app/11.2.0/grid/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /oracle/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Creating /usr/local/bin directory...

   Copying dbhome to /usr/local/bin ...

   Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

/oracle/app/11.2.0/grid/bin/lsdb.bin: Failed to initialize Cluster Context

skgxn error number 1311719766

  operation skgxnqtsz

  location SKGXN not av

errno 0: Error 0

/oracle/app/11.2.0/grid/bin/lsdb.bin: Cannot allocate memory of size 0

User grid has the required capabilities to run CSSD in realtime mode

OLR initialization - successful

  root wallet

  root wallet cert

  root cert export

  peer wallet

  profile reader wallet

  pa wallet

  peer wallet keys

  pa wallet keys

  peer cert request

  pa cert request

  peer cert

  pa cert

  peer root cert TP

  profile reader root cert TP

  pa root cert TP

  peer pa cert TP

  pa peer cert TP

  profile reader pa cert TP

  profile reader peer cert TP

  peer user cert

  pa user cert

Adding Clusterware entries to inittab

 CRS-2672: Attempting to start 'ora.mdnsd' on 'testdb1'

CRS-2676: Start of 'ora.mdnsd' on 'testdb1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'testdb1'

CRS-2676: Start of 'ora.gpnpd' on 'testdb1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'testdb1'

CRS-2672: Attempting to start 'ora.gipcd' on 'testdb1'

CRS-2676: Start of 'ora.cssdmonitor' on 'testdb1' succeeded

CRS-2676: Start of 'ora.gipcd' on 'testdb1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'testdb1'

CRS-2672: Attempting to start 'ora.diskmon' on 'testdb1'

CRS-2676: Start of 'ora.diskmon' on 'testdb1' succeeded

CRS-2676: Start of 'ora.cssd' on 'testdb1' succeeded

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'system'..

Operation successful.

Now formatting voting disk: /ocrvote/votedisk1.

Now formatting voting disk: /ocrvote/votedisk2.

Now formatting voting disk: /ocrvote/votedisk3.

CRS-4603: Successful addition of voting disk /ocrvote/votedisk1.

CRS-4603: Successful addition of voting disk /ocrvote/votedisk2.

CRS-4603: Successful addition of voting disk /ocrvote/votedisk3.

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   06b1309274a14fd4bf10b941d6a20f18 (/ocrvote/votedisk1) []

 2. ONLINE   99cdc9ce2c324f83bfa883078d6d99eb (/ocrvote/votedisk2) []

 3. ONLINE   3d57808180d04fd7bf37263930cc6013 (/ocrvote/votedisk3) []

Located 3 voting disk(s).

 Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Cause:

Oracle sees that /opt/ORCLcluster/lib exists and therefore tries to spawn processes to interact with HACMP; when those processes cannot be found, the error shown above appears:

Cannot allocate memory of size 0

The directory misleads Oracle into believing the system is running HACMP.

As long as root.sh completes successfully, GI operation is unaffected; the same error will reappear during later upgrades (when running rootupgrade.sh), but it does not affect the upgrade either.

If HACMP is not used, remove it cleanly and delete /opt/ORCLcluster/lib.

Bug 12845887 explains this behavior:
Bug 12845887 : AIX-11203-UD:LSDB.BIN FAIL TO INITIALIZE CLUSTER CONTEXT WHEN RUN ROOTUPGRADE.SH

@ There is configuration error. 
@ . 
@ This node does not run HACMP. But /opt/ORCLcluster/lib exists (presumably 
@ from old installs -- Feb 12). 
@ . 
@ It looks like when the old installs were done, HACMP was present and later it 
@ was removed. 
@ . 
@ Clean up /opt/ORCLcluster/lib directory. 
@ . 
@ lsdb will fail with SKGXN error in a non-HACMP environment. That is the 
@ behavior. 
@ . 

Solution:

root@testdb1:/oracle/app/oraInventory/logs#ls -lrt /opt/ORCLcluster/lib

total 0

lrwxrwxrwx 1 bin bin 36 May 06 14:59 libskgxnr.so-050814_190238 -> /opt/VRTSvcs/rac/lib64/libvcsmm_r.so

lrwxrwxrwx 1 bin bin 34 May 06 14:59 libskgxn2.so-050814_190238 -> /opt/VRTSvcs/rac/lib64/libvcsmm.so

lrwxrwxrwx 1 bin bin 34 May 06 14:59 libskgxn2.a-050814_190238 -> /opt/VRTSvcs/rac/lib64/libskgxn2.a

lrwxrwxrwx 1 root system 34 May 08 19:02 libskgxnr.so -> /opt/VRTSvcs/rac/lib64/libvcsmm.so

lrwxrwxrwx 1 root system 34 May 08 19:02 libskgxn2.so -> /opt/VRTSvcs/rac/lib64/libvcsmm.so

lrwxrwxrwx 1 root system 34 May 08 19:02 libskgxn2.a -> /opt/VRTSvcs/rac/lib64/libskgxn2.a

lrwxrwxrwx 1 root system 34 May 08 19:02 libskgxnr.a -> /opt/VRTSvcs/rac/lib64/libskgxn2.a

Because the files under /opt/ORCLcluster/lib are all symlinks into the Veritas (Symantec) cluster LIB packages, they cannot be deleted here; as long as root.sh completes successfully, the error can be ignored.

Problem 20: No nodes are shown on the DB installation screen.

This is caused by an incorrect CRS_HOME in inventory.xml, or by a missing CRS="true" attribute:

cat /oracle/app/oraInventory/ContentsXML/inventory.xml

<?xml version="1.0" standalone="yes" ?>

<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.

All rights reserved. -->

<!-- Do not modify the contents of this file by hand. -->

<INVENTORY>

<VERSION_INFO>

   <SAVED_WITH>11.2.0.4.0</SAVED_WITH>

   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>

</VERSION_INFO>

<HOME_LIST>

<HOME NAME="Ora11g_gridinfrahome1" LOC="/oracle/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">

   <NODE_LIST>

      <NODE NAME="esop_db1"/>

      <NODE NAME="esop_db2"/>

   </NODE_LIST>

</HOME>

</HOME_LIST>

<COMPOSITEHOME_LIST>

</COMPOSITEHOME_LIST>

</INVENTORY>

Note: without CRS="true", the nodes are not displayed.

If the entry is wrong, correct it with:

/oracle/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME="/oracle/app/11.2.0/grid" CRS=true
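A quick pre-flight check for the missing attribute can be scripted; a sketch (the inventory path is the one from this document):

```shell
# Sketch: check an inventory.xml for the CRS="true" attribute before
# launching the DB installer.
crs_flag_present() {
  if grep -q 'CRS="true"' "$1" 2>/dev/null; then
    echo "CRS flag present"
  else
    echo "CRS flag missing: fix with runInstaller -updateNodeList ... CRS=true"
  fi
}
crs_flag_present /oracle/app/oraInventory/ContentsXML/inventory.xml
```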

Problem 21: The SCAN name must not contain underscores; if a hostname contains underscores or other special characters, rename the host.

How to change the hostname:

AIX:

# smit hostname
  (Set the Hostname)
# vi /etc/hosts

HP-UX:

1) Check that the hostname is present in /etc/hosts

2) Edit /etc/rc.config.d/netconf

3) /sbin/hostname <new_name>

4) uname -S <new_name> (hostname length is limited)

5) set_parms hostname <new_name> (no length limit)

6) Change it via SAM

SOLARIS:

Edit /etc/nodename and enter the new name, then either reboot or run "hostname <new_name>" at the command line; the new hostname takes effect.

Make sure the hostname is consistent across the following files:

/etc/hosts

/etc/hostname.hme0

/etc/nodename

/etc/net/ticots/hosts

/etc/net/ticotsord/hosts

/etc/net/ticlts/hosts

LINUX:

Change the hostname with commands:

hostname                 # show the current hostname

hostname NEWHOSTNAME     # change the hostname temporarily

Or change it persistently through the configuration file:

vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=NEWHOSTNAME     # set this value to the new hostname, e.g. NEWPC

Problem 22: After applying GI PSU 11.2.0.4.2, the database fails to start with:

ORA-15025: could not open disk "/dev/rhdiskpower13"

ORA-27041: unable to open file

The full error is:

ALTER DATABASE   MOUNT

This instance was first to mount

NOTE: Loaded library: System

ORA-15025: could not open disk "/dev/rhdiskpower13"

ORA-27041: unable to open file

IBM AIX RISC System/6000 Error: 13: Permission denied

Additional information: 11

ORA-15025: could not open disk "/dev/rhdiskpower14"

ORA-27041: unable to open file

IBM AIX RISC System/6000 Error: 13: Permission denied

Additional information: 11

ORA-15025: could not open disk "/dev/rhdiskpower6"

ORA-27041: unable to open file

IBM AIX RISC System/6000 Error: 13: Permission denied

Additional information: 11

ORA-15025: could not open disk "/dev/rhdiskpower7"

ORA-27041: unable to open file

IBM AIX RISC System/6000 Error: 13: Permission denied

Additional information: 11

ORA-15025: could not open disk "/dev/rhdiskpower8"

ORA-27041: unable to open file

IBM AIX RISC System/6000 Error: 13: Permission denied

Additional information: 11

ORA-15025: could not open disk "/dev/rhdiskpower9"

ORA-27041: unable to open file

IBM AIX RISC System/6000 Error: 13: Permission denied

Additional information: 11

SUCCESS: diskgroup SYSDG was dismounted

ERROR: diskgroup SYSDG was not mounted

ORA-00210: cannot open the specified control file

ORA-00202: control file: ''+SYSDG/esopdb/controlfile/current.260.850493577''

ORA-17503: ksfdopn:2 Failed to open file +SYSDG/esopdb/controlfile/current.260.850493577

ORA-15001: diskgroup "SYSDG" does not exist or is not mounted

ORA-15040: diskgroup is incomplete

ORA-15040: diskgroup is incomplete

ORA-15040: diskgroup is incomplete

ORA-15040: diskgroup is incomplete

ORA-15040: diskgroup is incomplete

ORA-15040: diskgroup is incomplete

ORA-205 signalled during: ALTER DATABASE   MOUNT...

Wed Jun 18 15:48:41 2014

ALTER SYSTEM SET local_listener=' (ADDRESS=(PROTOCOL=TCP)(HOST=10.154.52.122)(PORT=1521))' SCOPE=MEMORY SID='esopdb2';

Wed Jun 18 15:48:51 2014

Reconfiguration started (old inc 2, new inc 4)

List of instances:

 1 2 (myinst: 2)

 Global Resource Directory frozen

 Communication channels reestablished

Wed Jun 18 15:48:52 2014

 * domain 0 valid = 0 according to instance 1

 Master broadcasted resource hash value bitmaps

 Non-local Process blocks cleaned out

Cause:

The disk group devices are owned grid:asmadmin with mode 660.

After the PSU was applied, the oracle executable was left owned oracle:oinstall, so it had no read/write permission on the disks.

Fix:

Change the ownership of the $ORACLE_HOME/bin/oracle executable; the correct ownership is oracle:asmadmin:

su - root

cd $ORACLE_HOME/bin/

chown oracle:asmadmin oracle
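One detail worth stressing: `chown` takes owner and group as a single colon-joined argument with no space after the colon (writing `oracle: asmadmin` would be parsed as owner `oracle:` plus two file operands and fail). A minimal sketch, run against a scratch file with the invoking user's own user and group, since the oracle/asmadmin accounts may not exist where you test this:

```shell
# chown wants OWNER:GROUP as one argument -- on the real system this is:
#   chown oracle:asmadmin $ORACLE_HOME/bin/oracle
# Demonstrated here on a scratch file with the current user's own user/group.
f=$(mktemp)
chown "$(id -un):$(id -gn)" "$f"   # single colon-joined argument, no space
ls -l "$f"
rm -f "$f"
```

On a healthy node the oracle binary typically shows `-rwsr-s--x oracle asmadmin`; compare the mode with a working node as well, since chown can clear the setuid/setgid bits.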

Problem 22: Installing the DB fails with no permission on /tmp

The full error is:

Cause: Failed to access the temporary location. 

Action:Ensure that the current user has required permissions to access the temporary location.  Additional Information:

The work directory "/tmp/" cannot be used Summary of the failed nodes jsljcj_db

The work directory "/tmp/" cannot be used on node "jsljcj_db"

Cause:

A database had been installed on this machine before. For the reinstall, a colleague simply dropped the oracle user and created a new one, and the new oracle user's UID and GID did not match the old ones, so oracle had no read/write permission on the following three items under /tmp:

logs

CVU_11.2.0.4.0_oracle

CVU_11.2.0.4.0_oracle_fixup

Fix:

su - root

cd /tmp

chown -Rf oracle:oinstall logs

chown -Rf oracle:oinstall CVU_11.2.0.4.0_oracle

chown -Rf oracle:oinstall CVU_11.2.0.4.0_oracle_fixup
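The mismatch can also be spotted before rerunning the installer. A rough check, using the directory names from this install (`ls -n` prints numeric uid/gid, which is what actually diverged when the user was recreated):

```shell
# Compare the numeric owner of leftover /tmp artifacts against the current
# oracle uid; a mismatch means chown is needed before the installer runs.
for d in /tmp/logs /tmp/CVU_11.2.0.4.0_oracle /tmp/CVU_11.2.0.4.0_oracle_fixup; do
    [ -e "$d" ] || continue
    file_uid=$(ls -nd "$d" | awk '{print $3}')
    oracle_uid=$(id -u oracle 2>/dev/null) || continue
    [ "$file_uid" = "$oracle_uid" ] || echo "$d: owned by uid $file_uid, not oracle ($oracle_uid)"
done
```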

Problem 23: While applying PSU 10, the log reports the following errors and the patch fails to install

 Patching component oracle.usm, 11.2.0.3.0...

 The following actions have failed:

 Copy failed from '/oracle/app/11.2.0/grid/.patch_storage/16619898_Jun_26_2013_21_46_53/files/lib/libeons.so' to '/oracle/app/11.2.0/grid/lib/libeons.so'...

 Copy failed from '/oracle/app/11.2.0/grid/.patch_storage/16619898_Jun_26_2013_21_46_53/files/lib/libhasgen11.so' to '/oracle/app/11.2.0/grid/lib/libhasgen11.so'...

 Copy failed from '/oracle/app/11.2.0/grid/.patch_storage/16619898_Jun_26_2013_21_46_53/files/lib/libocr11.so' to '/oracle/app/11.2.0/grid/lib/libocr11.so'...

 Copy failed from '/oracle/app/11.2.0/grid/.patch_storage/16619898_Jun_26_2013_21_46_53/files/lib/libocrb11.so' to '/oracle/app/11.2.0/grid/lib/libocrb11.so'...

 Copy failed from '/oracle/app/11.2.0/grid/.patch_storage/16619898_Jun_26_2013_21_46_53/files/lib/libocrutl11.so' to '/oracle/app/11.2.0/grid/lib/libocrutl11.so'...

 Do you want to proceed? [y|n]

 N (auto-answered by -silent)

 User Responded with: N

 ApplySession failed in system modification phase... 'Copy failed from '/oracle/app/11.2.0/grid/.patch_storage/16619898_Jun_26_2013_21_46_53/files/lib/libeons.so' to '

/oracle/app/11.2.0/grid/lib/libeons.so'...

 Copy failed from '/oracle/app/11.2.0/grid/.patch_storage/16619898_Jun_26_2013_21_46_53/files/lib/libhasgen11.so' to '/oracle/app/11.2.0/grid/lib/libhasgen11.so'...

 Copy failed from '/oracle/app/11.2.0/grid/.patch_storage/16619898_Jun_26_2013_21_46_53/files/lib/libocr11.so' to '/oracle/app/11.2.0/grid/lib/libocr11.so'...

 Copy failed from '/oracle/app/11.2.0/grid/.patch_storage/16619898_Jun_26_2013_21_46_53/files/lib/libocrb11.so' to '/oracle/app/11.2.0/grid/lib/libocrb11.so'...

 Copy failed from '/oracle/app/11.2.0/grid/.patch_storage/16619898_Jun_26_2013_21_46_53/files/lib/libocrutl11.so' to '/oracle/app/11.2.0/grid/lib/libocrutl11.so'...

 '

 Restoring "/oracle/app/11.2.0/grid" to the state prior to running NApply...

 OPatch failed to restore the files from backup area. Not running "make".

Cause:

The file copies failed mainly because the libraries were still locked in memory.

Fix:

1. While PSU 10 is being applied to GI_HOME, CRS is stopped automatically and the patch tool then runs slibclean once, but libraries already loaded into memory may not all be flushed. At that point, run slibclean manually several more times so that the loaded libraries are cleared from memory; the patch then applies successfully.

(While opatch auto is running, run a shell script that executes /usr/sbin/slibclean once a second, and stop the script after opatch auto finishes.)
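The helper script described in the parenthesis might look like the sketch below. It is AIX-specific in real use (/usr/sbin/slibclean); the command and an iteration bound are parameterized here only so the loop logic can be exercised off-box -- leave MAX=0 for the real, unbounded loop:

```shell
# Run CMD (slibclean on AIX) once a second until stopped or until MAX iterations.
# MAX=0 means loop forever (real use); a positive MAX bounds the loop for testing.
CMD=${CMD:-/usr/sbin/slibclean}
MAX=${MAX:-0}
slibclean_loop() {
    i=0
    while [ "$MAX" -eq 0 ] || [ "$i" -lt "$MAX" ]; do
        "$CMD"
        sleep 1
        i=$((i + 1))
    done
}
# Real use: start `slibclean_loop &` just before `opatch auto`,
# then kill the background job once opatch auto completes.
```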

2. If the libraries remain in memory even after repeated slibclean runs, move (rename) the library files in question; this helps release them from memory.

Example:

$ cd $ORACLE_HOME/lib

$ mv libjox9.a libjox9.a.orig


3. When the copy failure is reported, use the commands below to check whether the library file is still held in the OS system cache:

# genkld -d | grep /oracle/app/11.2.0/grid/lib/libeons.so

# fuser -fk /oracle/app/11.2.0/grid/lib/libeons.so <<<<<<<<<<<< if this finds processes using the file, consider terminating them

Note: if all of the above fail, consider applying PSU 10 manually; see the manual patching section of the PSU 10 README.

For details see:

Opatch Fails to Replace libjox*.a While Applying One Off Patch on AIX Platform (Doc ID 779083.1)

AIX: Apply PSU or Interim patch Fails with Copy Failed as TFA was Running (Doc ID 1668630.1)

Problem 24: Running ./runInstaller to install the DB software reports:

Has 'rootpre.sh' been run by root? [y/n] (n)

y

Error in GetCurrentDir(): 13

Error in GetCurrentDir(): 13

Error in GetCurrentDir(): 13

Error in GetCurrentDir(): 13

Starting Oracle Universal Installer...

Error returned::: A file or directory in the path name does not exist.

Checking temp space: 0 MB available, 190 MB required.The file access permissions do not allow the specified action.

sh: /command_output_1836066: 0403-005 Cannot create the specified file.

    Failed <<<<

Checking swap space: 0 MB available, 150 MB required.    Failed <<<<

Checking monitor: must be configured to display at least 256 colorsPermission denied

sh: /command_output_1836066: cannot create

Permission denied

sh: /command_output_1836066: cannot create

    >>> Could not execute /usr/bin/X11/xdpyinfo    Failed <<<<

Some requirement checks failed. You must fulfill these requirements before

continuing with the installation,

Continue? (y/n) [n] n

Cause:

The current directory could not be resolved; afterwards, even running du from /oracle could not find the current directory:

P570f_jfyf:/oracle$du -sg *

du: 0653-175 Cannot find the current directory.

Move the installation media to another local directory and install from there.
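A quick pre-check for the underlying condition -- the shell's working directory no longer resolving -- might look like this (a sketch, not taken from the MOS notes):

```shell
# If getcwd() fails (the directory was deleted or its filesystem was
# remounted underneath us), runInstaller's GetCurrentDir() will fail the
# same way; cd to a real directory before launching the installer.
if ! pwd -P >/dev/null 2>&1; then
    echo "current directory no longer exists -- cd to a valid directory first"
    cd /
fi
```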

See:

While Installing Oracle software facing "Error in GetCurrentDir(): 13" (Doc ID 378393.1)

Error Running ./runInstaller: Error In Getcurrentdir(): 13 (Doc ID 563296.1)

Problem 25: Installing the GI software reports PRVF-5149 : WARNING: Storage "/dev/rhdiskpower6" is not shared on all nodes

The full error is:

Device Checks for ASM - This is a pre-check to verify if the specified devices meet the requirements for configuration through the Oracle Universal Storage Manager Configuration Assistant.

Error:

-"/dev/rhdiskpower5" is not shared

- Cause:Cause Of Problem Not Available

- Action:User Action Not Available

-"/dev/rhdiskpower6" is not shared

- Cause:Cause Of Problem Not Available

- Action:User Action Not Available

-"/dev/rhdiskpower4" is not shared

- Cause:Cause Of Problem Not Available

- Action:User Action Not Available

  Check Failed on Nodes: [jfmydb2, jfmydb1] 

Verification result of failed node: jfmydb2

Details:

-PRVF-5149 : WARNING: Storage "/dev/rhdiskpower6" is not shared on all nodes

- Cause:

- Action:

-PRVF-5149 : WARNING: Storage "/dev/rhdiskpower5" is not shared on all nodes

- Cause:

- Action:

-PRVF-5149 : WARNING: Storage "/dev/rhdiskpower4" is not shared on all nodes

- Cause:

- Action:

Verification result of failed node: jfmydb1

Fix:

  1. Make sure the disks' owner, group, and permissions are identical on all nodes, that the owning UID and GID match, and that ASM has read/write access.
  2. Run a dd test on every node to confirm the disks can be read and written normally.
  3. Confirm the disk sharing attribute is set and the PVID has been cleared.

If all three points check out, the WARNING can be ignored.
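Point 2's dd check might look like the following, run on each node in turn. The device name is the one from this report; only a read test is shown, since a blind write test would destroy data:

```shell
# Read 10 MB from the candidate ASM disk on this node (repeat on every node).
# A failure here, independent of the CVU warning, indicates a real access problem.
DEV=${DEV:-/dev/rhdiskpower6}   # device from this report; adjust to yours
if dd if="$DEV" of=/dev/null bs=1024k count=10 2>/dev/null; then
    echo "read OK: $DEV"
else
    echo "read FAILED: $DEV"
fi
```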

Reference:

PRVF-5149 : WARNING: Storage "/dev/xxx" is Not Shared on All Nodes (Doc ID 1499523.1)
