Chapter 3: Spark SQL

The core of Spark Core is applying transformation after transformation to RDDs; Spark SQL, in the same spirit, applies transformation after transformation to DataFrames.
RDD programming requires a SparkContext object (usually named sc); Spark SQL requires a SparkSession object (usually named spark).

I. Generating a DataFrame from files

1. Import the package and create the required objects

scala> import org.apache.spark.sql.SparkSession  //import the SparkSession class
import org.apache.spark.sql.SparkSession
 
scala> val spark = SparkSession.builder().getOrCreate()  //create a SparkSession object named spark
spark: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@2bdab835
 
//enable implicit conversion of RDDs to DataFrames and the SQL operations that follow
scala> import spark.implicits._
import spark.implicits._

The three lines of code above are always required.

2. There are three kinds of files on disk: .json, .parquet, and .csv. Load each of them as a DataFrame.
① Read a json file into a DataFrame

scala> val df = spark.read.json("file:///usr/local/spark/examples/src/main/resources/people.json")
df: org.apache.spark.sql.DataFrame = [age: bigint, name: string]
 
scala> df.show()
+----+-------+
| age|   name|
+----+-------+
|null|Michael|
|  30|   Andy|
|  19| Justin|
+----+-------+

scala> val peopleDF = spark.read.format("json").load("file:///usr/local/spark/examples/src/main/resources/people.json")

② Read a parquet file into a DataFrame

scala> val df = spark.read.parquet("file:///usr/local/spark/examples/src/main/resources/people.parquet")

scala> val peopleDF = spark.read.format("parquet").load("file:///usr/local/spark/examples/src/main/resources/people.parquet")

③ Read a csv file into a DataFrame

scala> val df = spark.read.csv("file:///usr/local/spark/examples/src/main/resources/people.csv")

scala> val peopleDF = spark.read.format("csv").load("file:///usr/local/spark/examples/src/main/resources/people.csv")
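
Note that by default spark.read.csv() treats every column as a string and names the columns _c0, _c1, and so on, because a csv file carries no schema. If your csv file has a header row, you can ask Spark to use it and to infer the column types with the standard header and inferSchema options of the DataFrameReader. A minimal sketch (whether the bundled people.csv actually has a header depends on your Spark distribution, so adjust the path and options to your own file):

scala> val peopleCsvDF = spark.read.option("header", "true").option("inferSchema", "true").csv("file:///usr/local/spark/examples/src/main/resources/people.csv")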

3. Save a DataFrame as a .json, .parquet, or .csv file
① Write to a json file

df.write.json("file:///usr/local/spark/mycode/newpeople.json")

scala> df.write.format("json").save("file:///usr/local/spark/mycode/newpeople.json")

② Write to a parquet file

df.write.parquet("file:///usr/local/spark/mycode/newpeople.parquet")

scala> df.write.format("parquet").save("file:///usr/local/spark/mycode/newpeople.parquet")

③ Write to a csv file

df.write.csv("file:///usr/local/spark/mycode/newpeople.csv")

scala> df.write.format("csv").save("file:///usr/local/spark/mycode/newpeople.csv")
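
One thing to keep in mind for all of the write examples above: save() creates a directory at the target path and fails if that path already exists. The mode() method controls this behavior. A small sketch (the output path here is just an example):

scala> df.write.mode("overwrite").csv("file:///usr/local/spark/mycode/people_out.csv")  //replace the directory if it already exists
scala> df.write.mode("append").csv("file:///usr/local/spark/mycode/people_out.csv")  //or add new files to it instead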

4. How to save a DataFrame as a txt file

scala> val peopleDF = spark.read.format("json").load("file:///usr/local/spark/examples/src/main/resources/people.json")
scala> peopleDF.select("name", "age").write.format("csv").save("file:///usr/local/spark/mycode/newpeople.csv")

write.format() supports output formats such as json, parquet, jdbc, orc, libsvm, csv, and text. To write a plain text file, use write.format("text"), but note that this only works when select() contains a single column; with two columns, e.g. select("name", "age"), the DataFrame cannot be saved as a text file.
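
For example, selecting only the name column makes the text format work. A sketch (the output path is just an example):

scala> peopleDF.select("name").write.format("text").save("file:///usr/local/spark/mycode/newpeople_names")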

In RDD programming we saw that an RDD is saved as text files like this:

scala> val textFile = sc.textFile("file:///usr/word.txt")  //read a txt file
scala> textFile.saveAsTextFile("file:///usr/writeback")  //write back as text files

For a DataFrame, then, we do not recommend saving to txt; instead you can save to csv and print the contents line by line with println, just as you would with a txt file:

scala> val peopleDF = spark.read.format("json").load("file:///usr/local/spark/examples/src/main/resources/people.json")
scala> peopleDF.select("name", "age").write.format("csv").save("file:///usr/local/spark/mycode/newpeople.csv")

//print the saved files with println
scala> val textFile = sc.textFile("file:///usr/local/spark/mycode/newpeople.csv")
scala> textFile.foreach(println)
Justin,19
Michael,
Andy,30

//we don't recommend saving a DataFrame as txt, but it can be done:
scala> df.rdd.saveAsTextFile("file:///usr/local/spark/mycode/newpeople.txt")  

//read back the saved files and print them
scala> val textFile = sc.textFile("file:///usr/local/spark/mycode/newpeople.txt")
scala> textFile.foreach(println)
[null,Michael]
[30,Andy]
[19,Justin]

5. How to load a DataFrame from a txt file
Why do we recommend storing a DataFrame as csv, json, or parquet rather than txt?

Once a DataFrame has been stored as txt it is no longer a DataFrame. Loading a txt file back as a DataFrame takes the path "txt → rdd → DataFrame": first load the txt file as an RDD, then convert that RDD into a DataFrame.

We have covered how to turn a DataFrame into txt; how do we load a txt file as a DataFrame? (Because txt → DataFrame always goes through txt → rdd → DataFrame, any rdd → DataFrame conversion can follow the same procedure.)

See Part II, "Loading a DataFrame from a txt file".

6. Common query operations

scala> df.select(df("name"),df("age")+1).show()
scala> df.filter(df("age") > 20 ).show()
scala> df.groupBy("age").count().show()
scala> df.sort(df("age").desc).show()
scala> df.sort(df("age").desc, df("name").asc).show()
scala> df.select(df("name").as("username"),df("age")).show() //rename a column
scala> val namesDF = spark.sql("SELECT * FROM parquetFile")
scala> namesDF.foreach(attributes =>println("Name: " + attributes(0)+"  favorite color:"+attributes(1)))
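
The last two lines above only work after a temporary view named parquetFile has been registered (this is done in Part III below). To run SQL against df itself, register it as a view first; a small sketch, where the view name people is arbitrary:

scala> df.createOrReplaceTempView("people")
scala> spark.sql("SELECT name, age FROM people WHERE age > 20").show()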

7. A complete example

scala> import org.apache.spark.sql.SparkSession  //import the SparkSession class
scala> val spark = SparkSession.builder().getOrCreate()  //create a SparkSession object named spark
scala> import spark.implicits._
scala> val df = spark.read.json("file:///usr/local/spark/examples/src/main/resources/people.json")
scala> df.show()
+----+-------+
| age|   name|
+----+-------+
|null|Michael|
|  30|   Andy|
|  19| Justin|
+----+-------+
// Print the schema
scala> df.printSchema()
root
 |-- age: long (nullable = true)
 |-- name: string (nullable = true)
 
// Select multiple columns
scala> df.select(df("name"),df("age")+1).show()
+-------+---------+
|   name|(age + 1)|
+-------+---------+
|Michael|     null|
|   Andy|       31|
| Justin|       20|
+-------+---------+
 
// Filter rows
scala> df.filter(df("age") > 20 ).show()
+---+----+
|age|name|
+---+----+
| 30|Andy|
+---+----+
 
// Group and aggregate
scala> df.groupBy("age").count().show()
+----+-----+
| age|count|
+----+-----+
|  19|    1|
|null|    1|
|  30|    1|
+----+-----+
 
// Sort
scala> df.sort(df("age").desc).show()
+----+-------+
| age|   name|
+----+-------+
|  30|   Andy|
|  19| Justin|
|null|Michael|
+----+-------+
 
//Sort on multiple columns
scala> df.sort(df("age").desc, df("name").asc).show()
+----+-------+
| age|   name|
+----+-------+
|  30|   Andy|
|  19| Justin|
|null|Michael|
+----+-------+
 
//Rename a column
scala> df.select(df("name").as("username"),df("age")).show()
+--------+----+
|username| age|
+--------+----+
| Michael|null|
|    Andy|  30|
|  Justin|  19|
+--------+----+
 


II. Loading a DataFrame from a txt file

We have covered how to turn a DataFrame into txt; how do we load a txt file as a DataFrame? (Because txt → DataFrame always goes through txt → rdd → DataFrame, any rdd → DataFrame conversion can follow the steps below.)

To repeat, because it matters: any rdd → DataFrame conversion can follow the steps below.

(1) When you know in advance that the txt file contains the fields name and age: define a case class up front
1. Import the packages and create the required objects

scala> import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
 
scala> import org.apache.spark.sql.Encoder
import org.apache.spark.sql.Encoder
 
scala> import spark.implicits._  //import implicits so an RDD can be implicitly converted to a DataFrame
import spark.implicits._
 
scala> case class Person(name: String, age: Long)  //define a case class with the two fields name and age
defined class Person

2. Run the code

scala> val peopleDF = spark.sparkContext.textFile("file:///usr/local/spark/examples/src/main/resources/people.txt").map(_.split(",")).map(attributes => Person(attributes(0), attributes(1).trim.toInt)).toDF()
peopleDF: org.apache.spark.sql.DataFrame = [name: string, age: bigint]

Explanation of the code above:
spark.sparkContext.textFile() loads people.txt as an RDD of lines. The first map splits each line on commas, producing arrays such as Array("Michael", " 29"). The second map turns each array into a Person object, with attributes(0) as the name and attributes(1).trim.toInt as the age. Finally, toDF() converts the RDD of Person objects into a DataFrame whose column names and types are taken from the case class.

3. Run queries

scala> peopleDF.createOrReplaceTempView("people")  //must be registered as a temporary view before the query below can use it
 
scala> val personsRDD = spark.sql("select name,age from people where age > 20")
//the result is again a DataFrame
personsRDD: org.apache.spark.sql.DataFrame = [name: string, age: bigint]
scala> personsRDD.map(t => "Name:"+t(0)+","+"Age:"+t(1)).show()  //each element of the DataFrame is one row with the fields name and age, retrieved via t(0) and t(1)
 
+-------------------+
|              value|
+-------------------+
|Name:Michael,Age:29|
|   Name:Andy,Age:30|
+-------------------+
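
Instead of pulling fields out of each Row by position with t(0) and t(1), you can also convert the query result into a typed Dataset of Person objects, since the Person case class and spark.implicits._ are already in scope. A sketch:

scala> val personsDS = spark.sql("select name,age from people where age > 20").as[Person]
scala> personsDS.map(p => "Name:" + p.name + ",Age:" + p.age).show()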

(2) When you don't know what fields the txt file contains: define the RDD schema programmatically
When you cannot tell in advance what fields the txt file contains, you cannot define a case class up front, so you define the RDD schema programmatically instead.

import org.apache.spark.sql.types._
import org.apache.spark.sql.Row


//load the file into an RDD
val peopleRDD = spark.sparkContext.textFile("file:///usr/local/spark/examples/src/main/resources/people.txt")

//build a "header" with the fields "name" and "age"; StringType means the column holds strings, and nullable = true allows nulls in that field (this is your choice: if the field were a student ID, you would normally not allow nulls)
val schema = StructType(List(
  StructField("name", StringType, nullable = true),
  StructField("age", StringType, nullable = true)
))

//build the table rows: turn the loaded RDD into records that look like Row("Andy", "30")
val rowRDD = peopleRDD.map(_.split(",")).map(attributes => Row(attributes(0), attributes(1).trim))  //Row is the class imported above; age stays a string to match the StringType schema

//glue the "header" and the rows together
val peopleDF = spark.createDataFrame(rowRDD, schema)

//must be registered as a temporary view before the query below can use it
peopleDF.createOrReplaceTempView("people")
//run a query
val results = spark.sql("SELECT name,age FROM people")

Of course, you can also write it this way:

scala> import org.apache.spark.sql.types._
import org.apache.spark.sql.types._
 
scala> import org.apache.spark.sql.Row
import org.apache.spark.sql.Row
 
//create the RDD
scala> val peopleRDD = spark.sparkContext.textFile("file:///usr/local/spark/examples/src/main/resources/people.txt")
peopleRDD: org.apache.spark.rdd.RDD[String] = file:///usr/local/spark/examples/src/main/resources/people.txt MapPartitionsRDD[1] at textFile at <console>:26
 
 
scala> val schemaString = "name age"
schemaString: String = name age

scala> val fields = schemaString.split(" ").map(fieldName => StructField(fieldName, StringType, nullable = true))
fields: Array[org.apache.spark.sql.types.StructField] = Array(StructField(name,StringType,true), StructField(age,StringType,true))
 
scala> val schema = StructType(fields)
schema: org.apache.spark.sql.types.StructType = StructType(StructField(name,StringType,true), StructField(age,StringType,true))
//as the output above shows, schema describes the schema, which contains the two fields name and age
 
 
scala> val rowRDD = peopleRDD.map(_.split(",")).map(attributes => Row(attributes(0), attributes(1).trim))
rowRDD: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[3] at map at <console>:29
//after peopleRDD.map(_.split(",")), a line such as "Michael, 29" becomes a collection like {Michael, 29}; in the following map(p => Row(p(0), p(1).trim)), p is that collection, so p(0) is Michael and p(1).trim is 29, and map(p => Row(p(0), p(1).trim)) produces a Row object.
 
scala> val peopleDF = spark.createDataFrame(rowRDD, schema)
peopleDF: org.apache.spark.sql.DataFrame = [name: string, age: string]
 
//must be registered as a temporary view before the query below can use it
scala> peopleDF.createOrReplaceTempView("people")
 
scala> val results = spark.sql("SELECT name,age FROM people")
results: org.apache.spark.sql.DataFrame = [name: string, age: string]
 
scala> results.map(attributes => "name: " + attributes(0)+","+"age:"+attributes(1)).show()
+--------------------+
|               value|
+--------------------+
|name: Michael,age:29|
|   name: Andy,age:30|
| name: Justin,age:19|
+--------------------+

There is an example that fits this topic very well: see Chapter 2, "RDD Programming Examples: Lab Reports", Section 4, "Lab Report 4".



Practice the following content hands-on before you try to memorize it.

III. Reading and saving data

(1) Reading and writing Parquet (DataFrame)
The following code shows how to load data from a parquet file into a DataFrame.

scala> import spark.implicits._
import spark.implicits._
 
scala> val parquetFileDF = spark.read.parquet("file:///usr/local/spark/examples/src/main/resources/users.parquet")
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
parquetFileDF: org.apache.spark.sql.DataFrame = [name: string, favorite_color: string ... 1 more field]
 
scala> parquetFileDF.createOrReplaceTempView("parquetFile")
 
scala> val namesDF = spark.sql("SELECT * FROM parquetFile")
namesDF: org.apache.spark.sql.DataFrame = [name: string, favorite_color: string ... 1 more field]
 
scala> namesDF.foreach(attributes =>println("Name: " + attributes(0)+"  favorite color:"+attributes(1)))
16/12/02 10:18:49 WARN hadoop.ParquetRecordReader: Can not initialize counter due to context is not a instance of TaskInputOutputContext, but is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
Name: Alyssa  favorite color:null
Name: Ben  favorite color:red
 

Next, how to save a DataFrame as a parquet file.

scala> import spark.implicits._
import spark.implicits._
 
scala> val peopleDF = spark.read.json("file:///usr/local/spark/examples/src/main/resources/people.json")
peopleDF: org.apache.spark.sql.DataFrame = [age: bigint, name: string]
 
scala> peopleDF.write.parquet("file:///usr/local/spark/mycode/newpeople.parquet")
 

How do we load the data we just wrote back into a DataFrame?

scala> val users = spark.read.parquet("file:///usr/local/spark/mycode/newpeople.parquet")
users: org.apache.spark.sql.DataFrame = [age: bigint, name: string]

(2) Reading and writing database data via JDBC
① Create the MySQL database and table we need

mysql> create database spark;
mysql> use spark;
mysql> create table student (id int(4), name char(20), gender char(4), age int(4));
mysql> insert into student values(1,'Xueqian','F',23);
mysql> insert into student values(2,'Weiliang','M',24);
mysql> select * from student;

② Write a Spark program that connects to the MySQL database and reads and writes data.

service mysql start

cd /usr/local/spark

./bin/spark-shell --jars /usr/local/spark/jars/mysql-connector-java-5.1.40/mysql-connector-java-5.1.40-bin.jar --driver-class-path /usr/local/spark/jars/mysql-connector-java-5.1.40/mysql-connector-java-5.1.40-bin.jar

Once spark-shell is up, run the following command to connect to the database, read the data, and display it:

scala> val jdbcDF = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/spark").option("driver","com.mysql.jdbc.Driver").option("dbtable", "student").option("user", "root").option("password", "hadoop").load()
Fri Dec 02 11:56:56 CST 2016 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
jdbcDF: org.apache.spark.sql.DataFrame = [id: int, name: string ... 2 more fields]
 
scala> jdbcDF.show()
Fri Dec 02 11:57:30 CST 2016 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
+---+--------+------+---+
| id|    name|gender|age|
+---+--------+------+---+
|  1| Xueqian|     F| 23|
|  2|Weiliang|     M| 24|
+---+--------+------+---+
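
If you only need part of the table, you can push the filtering down to MySQL by passing a subquery (with an alias) as the dbtable option instead of a bare table name. A sketch using the same connection settings as above:

scala> val youngDF = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/spark").option("driver","com.mysql.jdbc.Driver").option("dbtable", "(select id, name, age from student where age < 24) as t").option("user", "root").option("password", "hadoop").load()
scala> youngDF.show()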
 

③ Now let's look at how to write data into MySQL.

service mysql start  #if MySQL is already running from the earlier steps, there is no need to run this again
mysql -u root -p

After running the commands above you will be prompted for the MySQL password; in this setup the database user is root and the password is hadoop.
We already created a database named spark with a table named student in MySQL; check them now:

mysql>  use spark;
Database changed
 
mysql> select * from student;
//the command above returns the following result
+------+----------+--------+------+
| id   | name     | gender | age  |
+------+----------+--------+------+
|    1 | Xueqian  | F      |   23 |
|    2 | Weiliang | M      |   24 |
+------+----------+--------+------+
2 rows in set (0.00 sec)
 

Now, in spark-shell, write a program that inserts two more records into the spark.student table.

//start a spark-shell with some extra parameters: the MySQL connector jar must be specified when launching (if you already started spark-shell this way earlier, there is no need to restart it):
cd /usr/local/spark

./bin/spark-shell \
--jars /usr/local/spark/jars/mysql-connector-java-5.1.40/mysql-connector-java-5.1.40-bin.jar \
--driver-class-path  /usr/local/spark/jars/mysql-connector-java-5.1.40/mysql-connector-java-5.1.40-bin.jar
//in the command above, the backslash \ at the end of a line tells spark-shell that the command is not finished yet.

Once spark-shell is up, run the following program to connect to the database and write the data (you can paste the statements into spark-shell one at a time):

import java.util.Properties
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row
 
//set up two records describing two students
val studentRDD = spark.sparkContext.parallelize(Array("3 Rongcheng M 26","4 Guanhua M 27")).map(_.split(" "))
 
//define the schema
val schema = StructType(List(StructField("id", IntegerType, true),StructField("name", StringType, true),StructField("gender", StringType, true),StructField("age", IntegerType, true)))
 
//create Row objects; each Row object becomes one row of rowRDD
val rowRDD = studentRDD.map(p => Row(p(0).toInt, p(1).trim, p(2).trim, p(3).toInt))
 
//tie the Row objects to the schema, i.e. attach the schema to the data
val studentDF = spark.createDataFrame(rowRDD, schema)
 
//create a prop variable to hold the JDBC connection parameters
val prop = new Properties()
prop.put("user", "root") //user name is root
prop.put("password", "hadoop") //password is hadoop
prop.put("driver","com.mysql.jdbc.Driver") //the JDBC driver is com.mysql.jdbc.Driver
 
//connect to the database in append mode, appending the records to the student table of the spark database
studentDF.write.mode("append").jdbc("jdbc:mysql://localhost:3306/spark", "spark.student", prop)

Back in the MySQL client, verify that the two new records were appended:

mysql> select * from student;
+------+-----------+--------+------+
| id   | name      | gender | age  |
+------+-----------+--------+------+
|    1 | Xueqian   | F      |   23 |
|    2 | Weiliang  | M      |   24 |
|    3 | Rongcheng | M      |   26 |
|    4 | Guanhua   | M      |   27 |
+------+-----------+--------+------+
4 rows in set (0.00 sec)
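
The same write can also be expressed with the option-style API, which avoids the separate Properties object. A sketch, assuming the same connection parameters as above:

studentDF.write.format("jdbc").option("url", "jdbc:mysql://localhost:3306/spark").option("driver", "com.mysql.jdbc.Driver").option("dbtable", "student").option("user", "root").option("password", "hadoop").mode("append").save()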

(3) Reading and writing data with Hive
First start MySQL:

service mysql start  #this command can be run from any directory in Linux

Start Hadoop:

cd /usr/local/hadoop
./sbin/start-all.sh

Then start Hive:

cd /usr/local/hive
./bin/hive  #since PATH is already configured, you can also just type hive without the path

Now enter Hive, create a database named sparktest, create a table named student in it, and insert two rows.

hive> create database if not exists sparktest; //create the database sparktest
hive> show databases; //check that the sparktest database was created
//now create a table student in the sparktest database
hive> create table if not exists sparktest.student(
> id int,
> name string,
> gender string,
> age int);
hive> use sparktest; //switch to sparktest
hive> show tables; //list the tables in the sparktest database
hive> insert into student values(1,'Xueqian','F',23); //insert one record
hive> insert into student values(2,'Weiliang','M',24); //insert another record
hive> select * from student; //display the records in the student table

Run the following commands in spark-shell to read data from Hive:

scala> import org.apache.spark.sql.Row
scala> import org.apache.spark.sql.SparkSession
 
scala> case class Record(key: Int, value: String)
 
// warehouseLocation points to the default location for managed databases and tables
scala> val warehouseLocation = "spark-warehouse"
 
scala> val spark = SparkSession.builder().appName("Spark Hive Example").config("spark.sql.warehouse.dir", warehouseLocation).enableHiveSupport().getOrCreate()
 
scala> import spark.implicits._
scala> import spark.sql
//the query result is shown below
scala> sql("SELECT * FROM sparktest.student").show()
+---+--------+------+---+
| id|    name|gender|age|
+---+--------+------+---+
|  1| Xueqian|     F| 23|
|  2|Weiliang|     M| 24|
+---+--------+------+---+

Next, back in spark-shell, write a program that inserts two rows into the Hive table sparktest.student:

scala> import java.util.Properties
scala> import org.apache.spark.sql.types._
scala> import org.apache.spark.sql.Row
 
//set up two records describing two students
scala> val studentRDD = spark.sparkContext.parallelize(Array("3 Rongcheng M 26","4 Guanhua M 27")).map(_.split(" "))
 
//define the schema
scala> val schema = StructType(List(StructField("id", IntegerType, true),StructField("name", StringType, true),StructField("gender", StringType, true),StructField("age", IntegerType, true)))
 
//create Row objects; each Row object becomes one row of rowRDD
scala> val rowRDD = studentRDD.map(p => Row(p(0).toInt, p(1).trim, p(2).trim, p(3).toInt))
 
//tie the Row objects to the schema, i.e. attach the schema to the data
scala> val studentDF = spark.createDataFrame(rowRDD, schema)
 
//inspect studentDF
scala> studentDF.show()
+---+---------+------+---+
| id|     name|gender|age|
+---+---------+------+---+
|  3|Rongcheng|     M| 26|
|  4|  Guanhua|     M| 27|
+---+---------+------+---+
//register a temporary table (registerTempTable is the older API; in Spark 2.x createOrReplaceTempView does the same thing)
scala> studentDF.registerTempTable("tempTable")
 
scala> sql("insert into sparktest.student select * from tempTable")
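
As an alternative to going through a temporary table and an INSERT statement, you can write the DataFrame into the existing Hive table directly with insertInto(), which matches columns by position. A sketch, assuming Hive support is enabled in this session as shown above:

scala> studentDF.write.mode("append").insertInto("sparktest.student")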

Then switch back to the Hive terminal window and run the following command to see how the table has changed:

Before the insert:
hive> select * from student;
OK
1   Xueqian F   23
2   Weiliang    M   24
Time taken: 0.05 seconds, Fetched: 2 row(s)

After the insert:
hive> select * from student;
OK
1   Xueqian F   23
2   Weiliang    M   24
3   Rongcheng   M   26
4   Guanhua M   27
Time taken: 0.049 seconds, Fetched: 4 row(s)