expdp/impdp on RAC: parallelism, dump file sharing, foreign keys, PARTITION_OPTIONS=MERGE, and statistics

1. Parallel export when the dump file destination is not shared across nodes

Customer receives the following errors:

ORA-31693: Table data object "<SCHEMA_NAME>"."<TABLE_NAME>" failed to load/unload and is being skipped due to error:
ORA-31617: unable to open dump file "<dumpfile name and path>" for write
ORA-19505: failed to identify file "<dumpfile name and path>"
ORA-27037: unable to obtain file status
Solaris-AMD64 Error: 2: No such file or directory
Additional information: 3


Note:
It is possible for this to occur on other operating systems since it is a mount point. The OS specific errors may therefore be different.
 

CAUSE

The problem occurs when Datapump Export is being performed on a multi-node RAC where the dumpfile destination is not shared to all nodes for access.  Since multiple nodes will be running the Datapump job, ALL nodes must have access to the mount point where the dump file will be written.
 
The issue is addressed in the following bug report which was closed with status 'Not a Bug':
Bug 11677316 - DATA PUMP UNABLE TO OPEN DUMP FILE ORA-31617 ORA-19505 ORA-27037
 

SOLUTION

1. Share/mount the dumpfile destination with all RAC nodes performing the expdp

- OR -

2. Use CLUSTER=N during Datapump so it will only run on the node which has the mount point and permissions to write to it.
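
Option 2 can be sketched as a Data Pump parameter file (the directory object, dump file, and schema names below are placeholders, not from the original note):

```
# expdp_local.par -- hypothetical parfile: run the export only on the invoking node
DIRECTORY=dp_dir        # directory object pointing at the non-shared mount point
DUMPFILE=exp%U.dmp
LOGFILE=exp.log
SCHEMAS=scott
PARALLEL=4
CLUSTER=N               # keep all Data Pump workers on this instance
```

Invoke it with `expdp system/<PASSWORD> parfile=expdp_local.par` on the node that owns the mount point.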

2. Import does not order tables by foreign key dependency; disable FK constraints manually

Errors like the following are reported in the DataPump import log:

ORA-31693: Table data object "<SCHEMA_NAME>"."<TABLE_NAME>" failed to load/unload and is being skipped due to error:
ORA-02291: integrity constraint ("<SCHEMA_NAME>"."<FK_CONSTRAINT_NAME>") violated - parent key not found


The issue can be reproduced with the following test case:

-- create tables (schema <SCHEMA_NAME>)

CREATE TABLE DEPT
(
   DEPTNO    NUMBER(2)    CONSTRAINT PK_DEPT PRIMARY KEY,
   DNAME     VARCHAR2(14),
   LOC       VARCHAR2(13)
);

CREATE TABLE EMP
(
   EMPNO     NUMBER(4)    CONSTRAINT PK_EMP PRIMARY KEY,
   ENAME     VARCHAR2(10),
   JOB       VARCHAR2(9),
   MGR       NUMBER(4),
   HIREDATE  DATE,
   SAL       NUMBER(7,2),
   COMM      NUMBER(7,2),
   DEPTNO    NUMBER(2)    CONSTRAINT FK_DEPTNO REFERENCES DEPT
);

-- run the import

#> impdp dumpfile=const.dmp logfile=constimp.log REMAP_SCHEMA=<SOURCE_SCHEMA>:<TARGET_SCHEMA> TABLE_EXISTS_ACTION=APPEND


You may receive errors like:

ORA-31693: Table data object "<SCHEMA_NAME>"."<TABLE_NAME>" failed to load/unload and is being skipped due to error:
ORA-02291: integrity constraint ("<SCHEMA_NAME>"."<FK_CONSTRAINT_NAME>") violated - parent key not found
. . imported "<SCHEMA_NAME>"."<TABLE_NAME>"          5.656 KB   4   rows imported

CAUSE

This issue is documented in
Bug 6242277 - DATA PUMP IMPORTS FIRST CHILD ROWS AND THEN PARENT ROWS
closed with status 'Not a Bug'.
 

SOLUTION

This is expected behavior, documented in the Oracle Database Utilities guide under Data Pump Import.

Please also refer to Oracle® Database Utilities 11g Release 2 (11.2)
Part Number E22490-04

To implement the solution, please use any of the following alternatives:

  • If you have data that must be loaded but may cause constraint violations, consider disabling the constraints, loading the data, and then deleting the problem rows before reenabling the constraints

    - OR -
     
  • Import the tables separately
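
The first alternative can be sketched in SQL using the DEPT/EMP test case above (adjust the constraint and table names to your schema):

```sql
-- 1. Disable the foreign key on the child table before the import
ALTER TABLE emp DISABLE CONSTRAINT fk_deptno;

-- 2. Run the impdp job; table data can now load in any order

-- 3. Delete any child rows whose parent key is missing
DELETE FROM emp e
 WHERE e.deptno IS NOT NULL
   AND NOT EXISTS (SELECT 1 FROM dept d WHERE d.deptno = e.deptno);

-- 4. Re-enable and validate the constraint
ALTER TABLE emp ENABLE VALIDATE CONSTRAINT fk_deptno;
```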

3. PARTITION_OPTIONS=MERGE: converting a partitioned table to a non-partitioned table

How to convert a partitioned table to a non-partitioned table using DataPump.

SOLUTION

A new import DataPump parameter PARTITION_OPTIONS has been introduced with 11g. The allowed values are:

NONE - Creates tables as they existed on the system from which the export operation was performed. This is the default value.

DEPARTITION - Promotes each partition or subpartition to a new individual table. The default name of the new table will be the concatenation of the table and partition name or the table and subpartition name, as appropriate.

MERGE - Combines all partitions and subpartitions into one table.

The PARTITION_OPTIONS parameter specifies how table partitions should be created during an import operation. To convert a partitioned table to a non-partitioned table, use PARTITION_OPTIONS=MERGE during the import.

The example below illustrates how to convert a partitioned table to a non-partitioned table using expdp/impdp.

1. Create a partitioned table and insert values into the partitioned table

connect scott/<PASSWORD>
create table part_tab
(
   year    number(4),
   product varchar2(10),
   amt     number(10,2)
)
partition by range (year)
(
   partition p1 values less than (1992) tablespace u1,
   partition p2 values less than (1993) tablespace u2,
   partition p3 values less than (1994) tablespace u3,
   partition p4 values less than (1995) tablespace u4,
   partition p5 values less than (MAXVALUE) tablespace u5
);

insert into part_tab values (1992, 'p1', 100);
insert into part_tab values (1993, 'p2', 200);
insert into part_tab values (1994, 'p3', 300);
insert into part_tab values (1995, 'p4', 400);
insert into part_tab values (2010, 'p5', 500);
commit;

select * from PART_TAB;

YEAR       PRODUCT    AMT
---------- ---------- ----------
      1992 p1                100
      1993 p2                200
      1994 p3                300
      1995 p4                400
      2010 p5                500

select OWNER, TABLE_NAME, PARTITIONED
from   dba_tables
where  table_name = 'PART_TAB' and owner = 'SCOTT';

OWNER                          TABLE_NAME PAR
------------------------------ ---------- ---
SCOTT                          PART_TAB   YES

select TABLE_OWNER, TABLE_NAME, PARTITION_NAME, TABLESPACE_NAME
from   dba_tab_partitions
where  TABLE_NAME = 'PART_TAB' and TABLE_OWNER = 'SCOTT';

TABLE_OWNER                    TABLE_NAME PARTITION_ TABLESPACE
------------------------------ ---------- ---------- ----------
SCOTT                          PART_TAB   P1         U1
SCOTT                          PART_TAB   P2         U2
SCOTT                          PART_TAB   P3         U3
SCOTT                          PART_TAB   P4         U4
SCOTT                          PART_TAB   P5         U5


2. Export the partitioned table:

#> expdp TABLES=scott.part_tab USERID="'/ as sysdba'" DIRECTORY=test_dir DUMPFILE=part_tab.dmp LOGFILE=part_tab.log

Export: Release 11.2.0.2.0 - Production on Thu Dec 23 08:27:24 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01": TABLES=scott.part_tab USERID="/******** AS SYSDBA" DIRECTORY=test_dir DUMPFILE=part_tab.dmp LOGFILE=part_tab.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 32 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SCOTT"."PART_TAB":"P2" 5.898 KB 1 rows
. . exported "SCOTT"."PART_TAB":"P3" 5.898 KB 1 rows
. . exported "SCOTT"."PART_TAB":"P4" 5.898 KB 1 rows
. . exported "SCOTT"."PART_TAB":"P5" 5.914 KB 2 rows
. . exported "SCOTT"."PART_TAB":"P1" 0 KB 0 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /tmp/part_tab.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at 08:28:02


3. Import the table into user "USER2" to convert the partitioned table into a non-partitioned table:

#> impdp USERID="'/ as sysdba'" TABLES=scott.part_tab DIRECTORY=test_dir DUMPFILE=part_tab.dmp LOGFILE=imp_part_tab.log REMAP_SCHEMA=scott:user2 PARTITION_OPTIONS=merge

Import: Release 11.2.0.2.0 - Production on Thu Dec 23 08:39:08 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYS"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TABLE_01": USERID="/******** AS SYSDBA" TABLES=scott.part_tab DIRECTORY=test_dir DUMPFILE=part_tab.dmp LOGFILE=imp_part_tab.log REMAP_SCHEMA=scott:user2 PARTITION_OPTIONS=merge
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "USER2"."PART_TAB":"P2" 5.898 KB 1 rows
. . imported "USER2"."PART_TAB":"P3" 5.898 KB 1 rows
. . imported "USER2"."PART_TAB":"P4" 5.898 KB 1 rows
. . imported "USER2"."PART_TAB":"P5" 5.914 KB 2 rows
. . imported "USER2"."PART_TAB":"P1" 0 KB 0 rows
Job "SYS"."SYS_IMPORT_TABLE_01" successfully completed at 08:39:17

select * from user2.part_tab;

YEAR       PRODUCT    AMT
---------- ---------- ----------
      1992 p1                100
      1993 p2                200
      1994 p3                300
      1995 p4                400
      2010 p5                500

select OWNER, TABLE_NAME, PARTITIONED
from   dba_tables
where  table_name = 'PART_TAB' and owner = 'USER2';

OWNER                          TABLE_NAME PAR
------------------------------ ---------- ---
USER2                          PART_TAB   NO

select TABLE_OWNER, TABLE_NAME, PARTITION_NAME, TABLESPACE_NAME
from   dba_tab_partitions
where  TABLE_NAME = 'PART_TAB' and TABLE_OWNER = 'USER2';

no rows selected


Note:
------
If there is a local or global prefixed index created on the partitioned table, import with PARTITION_OPTIONS=merge also converts the index to non-partitioned.

- local prefixed index:
CREATE INDEX part_tab_loc_idx ON part_tab(year) LOCAL;

After import with REMAP_SCHEMA=scott:user2 PARTITION_OPTIONS=merge, the local prefixed index is also converted to a non-partitioned index:

select OWNER, INDEX_NAME, PARTITIONED
from dba_indexes
where index_name='PART_TAB_LOC_IDX';

OWNER      INDEX_NAME           PAR
---------- -------------------- ---
SCOTT      PART_TAB_LOC_IDX     YES
USER2      PART_TAB_LOC_IDX     NO


  -or-

- global prefixed index (a global index that is itself partitioned):
CREATE INDEX part_tab_glob_idx ON part_tab(year)
GLOBAL PARTITION BY RANGE (year)
(partition p1 values less than (1992),
partition p2 values less than (1993),
partition p3 values less than (1994),
partition p4 values less than (1995),
partition p5 values less than (MAXVALUE)
);

After import with REMAP_SCHEMA=scott:user2 PARTITION_OPTIONS=merge, the global prefixed index is also converted to a non-partitioned index:

select OWNER, INDEX_NAME, PARTITIONED
from dba_indexes
where index_name='PART_TAB_GLOB_IDX';

OWNER      INDEX_NAME           PAR
---------- -------------------- ---
SCOTT      PART_TAB_GLOB_IDX    YES
USER2      PART_TAB_GLOB_IDX    NO

-----------------------------

This section describes which tablespace is used for objects created with PARTITION_OPTIONS=MERGE during impdp.
 

SOLUTION

Using PARTITION_OPTIONS=MERGE, all partitions and subpartitions are merged into a single table.
The tables and indexes will be created using the default tablespace of the import target user.

example:

# Object configuration of source DB
SQL> select username, default_tablespace from dba_users where username = 'TEST';

USERNAME             DEFAULT_TABLESPACE
-------------------- --------------------
TEST                 USERS

SQL> select segment_name, partition_name, segment_type, tablespace_name from dba_segments where owner = 'TEST' order by 1,2;

SEGMENT_NAME         PARTITION_NAME       SEGMENT_TYPE         TABLESPACE_NAME
-------------------- -------------------- -------------------- --------------------
T1                   P1                   TABLE PARTITION      TESTTS
T1                   P2                   TABLE PARTITION      TESTTS
T1_I_L               P1                   INDEX PARTITION      TESTTS
T1_I_L               P2                   INDEX PARTITION      TESTTS

# expdp command
expdp test/test directory=tmp_dir dumpfile=part.dmp

# impdp command
impdp test/test directory=tmp_dir dumpfile=part.dmp partition_options=merge

# Object configuration of target DB
SQL> select username, default_tablespace from dba_users where username = 'TEST';

USERNAME             DEFAULT_TABLESPACE
-------------------- --------------------
TEST                 USERS

SQL> select segment_name, partition_name, segment_type, tablespace_name from dba_segments where owner = 'TEST' order by 1,2;

SEGMENT_NAME         PARTITION_NAME       SEGMENT_TYPE         TABLESPACE_NAME
-------------------- -------------------- -------------------- --------------------
T1                                        TABLE                USERS
T1_I_L                                    INDEX                USERS

If you want the objects to be created in a tablespace other than the user's default tablespace, use one of the following options:

a)
a-1) Pre-create objects (tables and indexes) in a non-partitioned configuration
a-2) Run impdp

or

b)
b-1) Run impdp
b-2) Change the tablespace with "alter table ... move" and "alter index ... rebuild"
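
Option b-2 can be sketched as follows, reusing the object names from the example above (the target tablespace TESTTS is an assumption):

```sql
-- Move the merged table into the desired tablespace
ALTER TABLE test.t1 MOVE TABLESPACE testts;

-- The move marks dependent indexes UNUSABLE, so rebuild them afterwards
ALTER INDEX test.t1_i_l REBUILD TABLESPACE testts;
```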

---------------------- A pitfall with PARTITION_OPTIONS=MERGE

SYMPTOMS

If the target table pre-exists and both PARTITION_OPTIONS=MERGE and TABLE_EXISTS_ACTION=SKIP are specified in the parameter file or on the command line, duplicate rows are created.
Although impdp honors the skip request for the metadata, the table data is imported again when PARTITION_OPTIONS=MERGE is specified.

Simple test case scenario:

+++ Create a simple table, insert some data, and export the table.
+++ Import the table using TABLE_EXISTS_ACTION=SKIP and PARTITION_OPTIONS=MERGE: one row is loaded.
+++ Import the table again using TABLE_EXISTS_ACTION=SKIP and PARTITION_OPTIONS=MERGE: the row is loaded again.
+++ Duplicate rows are created each time we import from the same dump file.

Job IMPORT_1
============
impdp user/pwd@myinstance parfile=myfile.par

job_name=IMPORT_1
logfile=mylogfile.log
dumpfile=mydump.DMP
directory=data_pump_dir
TABLE_EXISTS_ACTION=SKIP       <<<<<<<<<<<<<<<<<<<<
PARTITION_OPTIONS=MERGE        <<<<<<<<<<<<<<<<<<<<
schemas=SCHEMA
include=TABLE:"='mytable'"

Import: Release 18.0.0.0.0 - Production on Tue Apr 2 17:38:21 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Master table "USER"."IMPORT_1" successfully loaded/unloaded
Starting "USER"."IMPORT_1":  USER/********@myinstance parfile=myfile.par
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SCHEMA"."mytable"                          5.062 KB       1 rows                  <<<<<<<<<<<<<<<<< 1 row uploaded
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "USER"."IMPORT_1" successfully completed at Tue Apr 2 14:39:09 2019 elapsed 0 00:00:33


SQL> select * from SCHEMA.mytable;

V1
----------
RECORD 01          <<<<<<<<<<<<<<<<< one row
 

Job IMPORT_2 duplicates the row - the issue does reproduce
==========================================================
impdp user/pwd@myinstance parfile=myfile.par

job_name=IMPORT_2
logfile=mylogfile.log
dumpfile=mydump.DMP
directory=data_pump_dir
TABLE_EXISTS_ACTION=SKIP   <<<<<<<<<<<<<<<<<<<<
PARTITION_OPTIONS=MERGE    <<<<<<<<<<<<<<<<<<<<
schemas=SCHEMA
include=TABLE:"='mytable'"

Import: Release 18.0.0.0.0 - Production on Tue Apr 2 17:41:07 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Master table "USER"."IMPORT_2" successfully loaded/unloaded
Starting "USER"."IMPORT_2":  USER/********@myinstance parfile=myfile.par
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Table "SCHEMA"."mytable" exists. All dependent metadata and data will be skipped due to table_exists_action of skip  <<<<<<<<
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SCHEMA"."mytable"                          5.062 KB       1 rows               <<<<<<<<<<<<<<<< again 1 row uploaded
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "USER"."IMPORT_2" successfully completed at Tue Apr 2 14:41:55 2019 elapsed 0 00:00:34


SQL>  select * from SCHEMA.mytable;

V1
----------
RECORD 01                 <<<<<<<<<<<<<<<<<<<<<<< duplicate record is inserted
RECORD 01   


Job IMPORT_3 doesn't reproduce the problem when PARTITION_OPTIONS=MERGE is omitted:
=======================================================================================
impdp user/pwd@myinstance parfile=myfile.par

job_name=IMPORT_3
logfile=mylogfile.log
dumpfile=mydump.DMP
directory=data_pump_dir
TABLE_EXISTS_ACTION=SKIP                                <<<<<<<<<<
schemas=SCHEMA
include=TABLE:"='mytable'"

Import: Release 18.0.0.0.0 - Production on Tue Apr 2 17:55:54 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Master table "USER"."IMPORT_3" successfully loaded/unloaded
Starting "USER"."IMPORT_3":  USER/********@myinstance parfile=myfile.par
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Table "SCHEMA"."mytable" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "USER"."IMPORT_3" successfully completed at Tue Apr 2 14:56:39 2019 elapsed 0 00:00:31


Only the two rows from the previous IMPORT_2 job are reported;
Job IMPORT_3 did not load any rows
===================================================
SQL> select * from SCHEMA.mytable;

V1
----------
RECORD 01       <<<<<<<<
RECORD 01       <<<<<<<<

 
Job IMPORT_4 doesn't reproduce the problem when using PARTITION_OPTIONS=MERGE with TABLE_EXISTS_ACTION=TRUNCATE:
=============================================================================================================
impdp user/pwd@myinstance parfile=myfile.par

job_name=IMPORT_4
logfile=mylogfile.log
dumpfile=mydump.DMP
directory=data_pump_dir
TABLE_EXISTS_ACTION=TRUNCATE       <<<<<<<<<<<
PARTITION_OPTIONS=MERGE            <<<<<<<<<<<
schemas=SCHEMA
include=TABLE:"='mytable'"

Import: Release 18.0.0.0.0 - Production on Tue Apr 2 18:29:08 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Master table "USER"."IMPORT_4" successfully loaded/unloaded
Starting "USER"."IMPORT_4":  USER/********@myinstance parfile=myfile.par
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Table "SCHEMA"."mytable" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SCHEMA"."mytable"                          5.062 KB       1 rows          <<<<<<<<<<<
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "USER"."IMPORT_4" successfully completed at Tue Apr 2 15:30:01 2019 elapsed 0 00:00:38


The pre-existing table was truncated before the load, so only the single row from the dump file remains
===================================================
SQL> select * from SCHEMA.mytable;

V1
----------
RECORD 01          <<<<<<<<< no duplicates
 

CAUSE

This is due to unpublished BUG 27495407 - DP IMPORT LOADS INTO PRE-EXISTING TABLE WITH PARTITION_OPTIONS=MERGE.

SOLUTION

To solve this issue, use any of the alternatives below:

1) Make sure the target table does not pre-exist before the import.

- or -

2) Remove PARTITION_OPTIONS=MERGE parameter.

- or -

3) Change TABLE_EXISTS_ACTION to something other than SKIP (the default). For example, use TABLE_EXISTS_ACTION=TRUNCATE if feasible.
 
- or - 

4) Apply the one-off Patch 27495407 if available for your platform and version.

- or -

5) Upgrade to 20.1 where the fix for unpublished Bug 27495407 is included.

4. Table statistics are locked after impdp

DataPump Import without data (CONTENT=METADATA_ONLY) locks the statistics.

Executing DBMS_STATS.GATHER_TABLE_STATS to collect statistics on the imported table fails with:
 

ORA-20005: object statistics are locked (stattype = ALL)

CAUSE

This is expected behavior since 10.2.

---- This makes sense: since the statistics are imported but the row data is not, the imported statistics must be protected from being regathered.

The statistics are locked during a DataPump Import if the export or import was performed with CONTENT=METADATA_ONLY. This is because automatic statistics gathering is enabled by default in 10g; the imported statistics, if not locked, would be lost the next time the auto-stats job runs.
 

SOLUTION

To avoid the ORA-20005:

1. Unlock the table statistics after the import:
 

execute DBMS_STATS.UNLOCK_TABLE_STATS ('<user name>', '<table name>');


- OR -

2. Do not import the table statistics (add EXCLUDE=TABLE_STATISTICS to impdp parameters)

If the table is a queue table, the statistics should remain empty and locked so that dynamic sampling is used, due to the volatility of queue tables. If the table is not a queue table, unlock the statistics using the following:

DBMS_STATS.UNLOCK_[SCHEMA|TABLE]_STATS

Or gather statistics on the table using the following:

DBMS_STATS.GATHER_[SCHEMA|TABLE|INDEX]_STATS and the force=>true parameter
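
For example (the owner and table names are placeholders):

```sql
-- Unlock the statistics, then gather normally
EXEC DBMS_STATS.UNLOCK_TABLE_STATS('<OWNER>', '<TABLE>');
EXEC DBMS_STATS.GATHER_TABLE_STATS('<OWNER>', '<TABLE>');

-- Or gather over the lock without unlocking
EXEC DBMS_STATS.GATHER_TABLE_STATS('<OWNER>', '<TABLE>', force => TRUE);
```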

To prevent import (imp) from locking the table's statistics when importing a table without the rows (rows=n), use statistics=none. To prevent data pump import (impdp) from locking the table's statistics when importing a table without the rows (content=metadata_only), use exclude=(table_statistics,index_statistics).
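
A minimal parameter file sketch for the Data Pump case (the directory object and dump file names are placeholders):

```
# imp_meta.par -- hypothetical parfile: metadata-only import without statistics
DIRECTORY=dp_dir
DUMPFILE=meta.dmp
CONTENT=METADATA_ONLY
EXCLUDE=TABLE_STATISTICS
EXCLUDE=INDEX_STATISTICS
```

With the statistics excluded, nothing is locked, and fresh statistics can be gathered after the data is loaded.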

Examples

EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('owner name', 'table name');
EXECUTE DBMS_STATS.LOCK_SCHEMA_STATS ('owner name');

SELECT owner, table_name, stattype_locked
FROM   dba_tab_statistics
WHERE  stattype_locked IS NOT NULL;
