Implementing MapReduce Functionality

Copyright notice: This is an original post by the author. Please respect the work, give it a thumbs up below if you find it useful, and credit the original source when reposting: https://blog.csdn.net/m0_37739193/article/details/76053636

The MapReduce Functionality series:

MapReduce Functionality Part 1 --- Converting data between HBase and HDFS

MapReduce Functionality Part 2 --- Sorting

MapReduce Functionality Part 3 --- Top N

MapReduce Functionality Part 4 --- A small exercise (read data from HBase, aggregate it, and write the Top 3 to HDFS in descending order)

MapReduce Functionality Part 5 --- Deduplication (Distinct) and counting (Count)

MapReduce Functionality Part 6 --- Maximum (Max), sum (Sum), and average (Avg)

MapReduce Functionality Part 7 --- A small exercise (chaining multiple jobs to compute an average)

MapReduce Functionality Part 8 --- Partitioning (Partition)

MapReduce Functionality Part 9 --- PV and UV

MapReduce Functionality Part 10 --- Inverted index (Inverted Index)

MapReduce Functionality Part 11 --- Join


I. Reading data from HBase table 1 and writing the aggregated result to table 2

Create table 1 in HBase:


 
 
create 'hello', 'cf'
put 'hello', '1', 'cf:hui', 'hello world'
put 'hello', '2', 'cf:hui', 'hello hadoop'
put 'hello', '3', 'cf:hui', 'hello hive'
put 'hello', '4', 'cf:hui', 'hello hadoop'
put 'hello', '5', 'cf:hui', 'hello world'
put 'hello', '6', 'cf:hui', 'hello world'
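
For reference, the same six rows can also be inserted from Java. The following is a minimal sketch, not part of the original article, that uses the same old-style HBase client API as the code below; the class name LoadHello is made up for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class LoadHello {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "h71");   // same ZooKeeper host used later in the article
        HTable table = new HTable(conf, "hello");
        String[] lines = {"hello world", "hello hadoop", "hello hive",
                          "hello hadoop", "hello world", "hello world"};
        for (int i = 0; i < lines.length; i++) {
            // Row keys '1'..'6', column cf:hui, matching the shell puts above.
            Put put = new Put(Bytes.toBytes(String.valueOf(i + 1)));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("hui"), Bytes.toBytes(lines[i]));
            table.put(put);
        }
        table.close();
    }
}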

Java code:

 
 
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class HBaseToHbase {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        String hbaseTableName1 = "hello";
        String hbaseTableName2 = "mytb2";

        prepareTB2(hbaseTableName2);

        Configuration conf = new Configuration();

        Job job = Job.getInstance(conf);
        job.setJarByClass(HBaseToHbase.class);
        job.setJobName("mrreadwritehbase");

        Scan scan = new Scan();
        scan.setCaching(500);
        scan.setCacheBlocks(false);

        TableMapReduceUtil.initTableMapperJob(hbaseTableName1, scan, doMapper.class, Text.class, IntWritable.class, job);
        TableMapReduceUtil.initTableReducerJob(hbaseTableName2, doReducer.class, job);
        // Exit with 0 on success, 1 on failure.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    public static class doMapper extends TableMapper<Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);

        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context context) throws IOException, InterruptedException {
            // Use the value of the row's first cell (cf:hui) as the map output key.
            String rowValue = Bytes.toString(value.list().get(0).getValue());
            context.write(new Text(rowValue), one);
        }
    }

    public static class doReducer extends TableReducer<Text, IntWritable, NullWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            System.out.println(key.toString());
            int sum = 0;
            Iterator<IntWritable> haha = values.iterator();
            while (haha.hasNext()) {
                sum += haha.next().get();
            }
            Put put = new Put(Bytes.toBytes(key.toString()));
            put.add(Bytes.toBytes("mycolumnfamily"), Bytes.toBytes("count"), Bytes.toBytes(String.valueOf(sum)));
            context.write(NullWritable.get(), put);
        }
    }

    public static void prepareTB2(String hbaseTableName) throws IOException {
        HTableDescriptor tableDesc = new HTableDescriptor(hbaseTableName);
        HColumnDescriptor columnDesc = new HColumnDescriptor("mycolumnfamily");
        tableDesc.addFamily(columnDesc);
        Configuration cfg = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(cfg);
        if (admin.tableExists(hbaseTableName)) {
            System.out.println("Table exists, dropping and recreating it!");
            admin.disableTable(hbaseTableName);
            admin.deleteTable(hbaseTableName);
            admin.createTable(tableDesc);
        } else {
            System.out.println("create table: " + hbaseTableName);
            admin.createTable(tableDesc);
        }
    }
}

Compile and run the code on Linux:

 
 
[hadoop@h71 q1]$ /usr/jdk1.7.0_25/bin/javac HBaseToHbase.java
[hadoop@h71 q1]$ /usr/jdk1.7.0_25/bin/jar cvf xx.jar HBaseToHbase*class
[hadoop@h71 q1]$ hadoop jar xx.jar HBaseToHbase

Check the mytb2 table:

 
 
hbase(main):009:0> scan 'mytb2'
ROW                COLUMN+CELL
 hello hadoop      column=mycolumnfamily:count, timestamp=1489817182454, value=2
 hello hive        column=mycolumnfamily:count, timestamp=1489817182454, value=1
 hello world       column=mycolumnfamily:count, timestamp=1489817182454, value=3
3 row(s) in 0.0260 seconds
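
A single row can also be read back programmatically to verify the result. The following is a minimal sketch, not part of the original article, using the same old-style HBase client API as the code above; the class name CheckCount is made up, and the row key "hello world" and column names match the scan output shown:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "h71");   // same quorum host as in the article
        HTable table = new HTable(conf, "mytb2");
        // Fetch the row whose key is one of the counted phrases.
        Get get = new Get(Bytes.toBytes("hello world"));
        Result result = table.get(get);
        byte[] count = result.getValue(Bytes.toBytes("mycolumnfamily"), Bytes.toBytes("count"));
        // The reducer stored the count as a string, so decode it the same way.
        System.out.println("hello world -> " + Bytes.toString(count));
        table.close();
    }
}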

II. Reading data from HBase table 1 and writing the result to HDFS

1. Output the contents of table 1 without aggregation:


 
 
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class HbaseToHdfs {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        String tablename = "hello";
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "h71");

        Job job = new Job(conf, "WordCountHbaseReader");
        job.setJarByClass(HbaseToHdfs.class);

        Scan scan = new Scan();
        TableMapReduceUtil.initTableMapperJob(tablename, scan, doMapper.class, Text.class, Text.class, job);
        job.setReducerClass(WordCountHbaseReaderReduce.class);
        FileOutputFormat.setOutputPath(job, new Path(args[0]));
        // A named output is registered here but never written to in the reducer.
        MultipleOutputs.addNamedOutput(job, "hdfs", TextOutputFormat.class, WritableComparable.class, Writable.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    public static class doMapper extends TableMapper<Text, Text> {
        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context context) throws IOException, InterruptedException {
            String rowValue = Bytes.toString(value.list().get(0).getValue());
            context.write(new Text(rowValue), new Text("one"));
        }
    }

    public static class WordCountHbaseReaderReduce extends Reducer<Text, Text, Text, NullWritable> {
        private Text result = new Text();

        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            // Write the key once per occurrence so duplicate rows are preserved in the output.
            for (Text val : values) {
                result.set(val);
                context.write(key, NullWritable.get());
            }
        }
    }
}

Compile and run the code on Linux:
[hadoop@h71 q1]$ /usr/jdk1.7.0_25/bin/javac HbaseToHdfs.java
[hadoop@h71 q1]$ /usr/jdk1.7.0_25/bin/jar cvf xx.jar HbaseToHdfs*class
[hadoop@h71 q1]$ hadoop jar xx.jar HbaseToHdfs /output
Note: the /output directory must not already exist; if it does, delete it first (a driver-side sketch for this follows below).
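
As an alternative to deleting /output by hand, the driver can remove an existing output directory before submitting the job. This is a minimal sketch, not part of the original code; the helper class and method names are made up for illustration, and it assumes the output path is passed in exactly as args[0] above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: call it at the start of main() before FileOutputFormat.setOutputPath().
public class OutputCleaner {
    public static void deleteIfExists(Configuration conf, String output) throws Exception {
        FileSystem fs = FileSystem.get(conf);
        Path outputPath = new Path(output);
        if (fs.exists(outputPath)) {
            // Recursive delete, so the job will not fail with "output directory already exists".
            fs.delete(outputPath, true);
        }
    }
}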


[hadoop@h71 q1]$ hadoop fs -ls /output
Found 2 items
-rw-r--r--   2 hadoop supergroup          0 2017-03-18 14:28 /output/_SUCCESS
-rw-r--r--   2 hadoop supergroup         73 2017-03-18 14:28 /output/part-r-00000
[hadoop@h71 q1]$ hadoop fs -cat /output/part-r-00000
hello hadoop
hello hadoop
hello hive
hello world
hello world
hello world


2. Output the contents of table 1 with word counts:


 
 
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class HbaseToHdfs1 {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        String tablename = "hello";
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "h71");

        Job job = new Job(conf, "WordCountHbaseReader");
        job.setJarByClass(HbaseToHdfs1.class);

        Scan scan = new Scan();
        TableMapReduceUtil.initTableMapperJob(tablename, scan, doMapper.class, Text.class, IntWritable.class, job);
        job.setReducerClass(WordCountHbaseReaderReduce.class);
        FileOutputFormat.setOutputPath(job, new Path(args[0]));
        MultipleOutputs.addNamedOutput(job, "hdfs", TextOutputFormat.class, WritableComparable.class, Writable.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    public static class doMapper extends TableMapper<Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context context) throws IOException, InterruptedException {
            /*
            String rowValue = Bytes.toString(value.list().get(0).getValue());
            context.write(new Text(rowValue), one);
            */
            // Split the cell value into words and emit (word, 1) for each of them.
            String[] rowValue = Bytes.toString(value.list().get(0).getValue()).split(" ");
            for (String str : rowValue) {
                word.set(str);
                context.write(word, one);
            }
        }
    }

    public static class WordCountHbaseReaderReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int total = 0;
            // Every map output value is 1, so counting the values is equivalent to summing them.
            for (IntWritable val : values) {
                total++;
            }
            context.write(key, new IntWritable(total));
        }
    }
}

[hadoop@h71 q1]$ hadoop fs -cat /output/part-r-00000
hadoop  2
hello   6
hive    1
world   3


III. Reading a file from HDFS and writing the word counts to an HBase table

Create a file and upload it to HDFS:


 
 
[hadoop@h71 q1]$ vi hello.txt
hello world
hello hadoop
hello hive
hello hadoop
hello world
hello world
[hadoop@h71 q1]$ hadoop fs -mkdir /input
[hadoop@h71 q1]$ hadoop fs -put hello.txt /input
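
The upload can also be done from Java. The following is a minimal sketch, not part of the original article; the class name UploadHello is made up, and it assumes the Hadoop configuration files are on the classpath and hello.txt sits in the current working directory:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadHello {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);
        fs.mkdirs(new Path("/input"));                                              // same as: hadoop fs -mkdir /input
        fs.copyFromLocalFile(new Path("hello.txt"), new Path("/input/hello.txt"));  // same as: hadoop fs -put
        fs.close();
    }
}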

Java code:

 
 
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class HdfsToHBase {
    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private IntWritable i = new IntWritable(1);

        @Override
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            // With TextInputFormat each value is a single line, so the split on "\n"
            // simply yields the whole (trimmed) line, which becomes the map output key.
            String s[] = value.toString().trim().split("\n");
            for (String m : s) {
                context.write(new Text(m), i);
            }
        }
    }

    public static class Reduce extends TableReducer<Text, IntWritable, NullWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable i : values) {
                sum += i.get();
            }
            Put put = new Put(Bytes.toBytes(key.toString()));
            // Column family cf, column count, cell value is the count.
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("count"), Bytes.toBytes(String.valueOf(sum)));
            context.write(NullWritable.get(), put);
        }
    }

    public static void createHBaseTable(String tableName) throws IOException {
        HTableDescriptor htd = new HTableDescriptor(tableName);
        HColumnDescriptor col = new HColumnDescriptor("cf");
        htd.addFamily(col);
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "h71");
        HBaseAdmin admin = new HBaseAdmin(conf);
        if (admin.tableExists(tableName)) {
            System.out.println("table exists, trying to recreate table......");
            admin.disableTable(tableName);
            admin.deleteTable(tableName);
        }
        System.out.println("create new table:" + tableName);
        admin.createTable(htd);
    }

    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        // Name of the HBase table that receives the result.
        String tableName = "mytb2";
        Configuration conf = new Configuration();
        conf.set(TableOutputFormat.OUTPUT_TABLE, tableName);
        createHBaseTable(tableName);

        String input = args[0];
        Job job = new Job(conf, "WordCount table with " + input);
        job.setJarByClass(HdfsToHBase.class);
        job.setNumReduceTasks(3);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TableOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(input));
        // FileInputFormat.setInputPaths(job, new Path(input)); // this works as well
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

[hadoop@h71 q1]$ /usr/jdk1.7.0_25/bin/javac HdfsToHBase.java
[hadoop@h71 q1]$ /usr/jdk1.7.0_25/bin/jar cvf xx.jar HdfsToHBase*class
[hadoop@h71 q1]$ hadoop jar xx.jar HdfsToHBase /input/hello.txt



 
 
hbase(main):011:0> scan 'mytb2'
ROW                COLUMN+CELL
 hello hadoop      column=cf:count, timestamp=1489819702236, value=2
 hello hive        column=cf:count, timestamp=1489819702236, value=1
 hello world       column=cf:count, timestamp=1489819704448, value=3
3 row(s) in 0.3260 seconds

IV. From HDFS to HDFS (the classic MapReduce WordCount example)

Java code:


 
 
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class HdfsToHdfs {
    public static class WordCountMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            String[] words = value.toString().split(" ");
            for (String str : words) {
                word.set(str);
                context.write(word, one);
            }
        }
    }

    public static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int total = 0;
            for (IntWritable val : values) {
                total++;
            }
            context.write(key, new IntWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");
        job.setJarByClass(HdfsToHdfs.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

[hadoop@h71 q1]$ /usr/jdk1.7.0_25/bin/javac HdfsToHdfs.java
[hadoop@h71 q1]$ /usr/jdk1.7.0_25/bin/jar cvf xx.jar HdfsToHdfs*class
[hadoop@h71 q1]$ hadoop jar xx.jar HdfsToHdfs /input/hello.txt /output


[hadoop@h71 q1]$ hadoop fs -cat /output/part-r-00000
hadoop  2
hello   6
hive    1
world   3


Note: this WordCount example uses the Hadoop 2 (new) API. My other post, http://blog.csdn.net/m0_37739193/article/details/71132652, shows the Hadoop 1 (old-API) version. From Hadoop 0.20.0 onward both versions of the API ship together, so code written against either API will run.

Differences between the old and new Hadoop MapReduce APIs:

Hadoop 0.20.0 introduced a new Java MapReduce API, sometimes called the "context object" API, designed to make the API easier to evolve. The new API is type-incompatible with the old one, so existing applications must be rewritten to take advantage of it.

The new API favors abstract classes over interfaces, because abstract classes are easier to evolve: a method with a default implementation can be added without modifying existing subclasses. In the new API, Mapper and Reducer are both abstract classes.

-- An interface is a strict contract: it contains only method declarations, and every implementing class (abstract classes excepted) must implement each of its methods.

-- An abstract class is a looser contract: it can provide default implementations for some methods, and subclasses choose whether to override them. This gives abstract classes an advantage as a class hierarchy evolves, i.e. better backward compatibility.

The new API lives in the org.apache.hadoop.mapreduce package (and its subpackages); the old API remains in org.apache.hadoop.mapred.

The new API makes extensive use of context objects that let user code communicate with the MapReduce framework. For example, MapContext essentially combines the roles of JobConf, OutputCollector, and Reporter.

The new API supports both "push" and "pull" iteration. Both APIs push key/value records to the mapper, but the new API also allows records to be pulled from within map(); the same applies to the reducer. Pull-style processing makes it possible to handle records in batches rather than one at a time.

The new API unifies job configuration. The old API configured jobs through a special JobConf object, an extension of Hadoop's Configuration object; the new API drops this distinction, and all job configuration goes through Configuration.

Job control is handled by the Job class rather than JobClient, which no longer exists in the new API.

Output file naming differs slightly: map output files are named part-m-nnnnn and reduce output files part-r-nnnnn, where nnnnn is the partition number, an integer starting from 0.

When converting Mapper and Reducer classes written against the old API, remember to convert the signatures of map() and reduce() to the new forms. Merely changing the class declaration to extend the new Mapper or Reducer classes will still compile without errors or warnings, because the new classes also provide equivalent map() and reduce() methods; however, your own mapper or reducer code will never be called, which leads to errors that are hard to diagnose.
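
To make the signature difference concrete, here is a minimal sketch (not from the original article) of the same word-count mapper written once against the old org.apache.hadoop.mapred API and once against the new org.apache.hadoop.mapreduce API; the wrapper class name ApiComparison is made up for illustration:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
// Old API
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
// New API
import org.apache.hadoop.mapreduce.Mapper;

public class ApiComparison {

    // Old API: implement the Mapper interface; output goes through OutputCollector,
    // and progress/status is reported through Reporter.
    public static class OldApiMapper extends MapReduceBase
            implements org.apache.hadoop.mapred.Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            for (String word : value.toString().split(" ")) {
                output.collect(new Text(word), one);
            }
        }
    }

    // New API: extend the abstract Mapper class; the Context object replaces both
    // OutputCollector and Reporter.
    public static class NewApiMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String word : value.toString().split(" ")) {
                context.write(new Text(word), one);
            }
        }
    }
}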
