Hive Local Mode and Remote Mode Operations


1. Install the MySQL service
Method 1: On CentOS 7.x, run the following commands.
(1) yum install mysql mysql-server mysql-devel -y    (this installs mariadb-devel-5.5.65, mariadb-5.5.65, and mariadb-libs-5.5.65, but not mariadb-server, so continue with the next command)
(2) yum install mariadb-server -y
(3) systemctl start mariadb
(4) systemctl enable mariadb
(5) mysql
(6) use mysql;
(7) update user set password=password('000000') where user='root';
(8) grant all privileges on *.* to 'root'@'%' identified by '000000' with grant option;
(9) flush privileges;
(10) Proceed to step 7 (Hive configuration).
Method 2:
yum install -y mariadb mariadb-server
then follow steps 2-6 below in order.
2. systemctl start mariadb    // start the database service
3. systemctl enable mariadb    // start the database service automatically at boot
4. mysql_secure_installation    // initialize the database; set the password to six zeros (000000). The initialization transcript is copied below, between the two # markers.
#
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):     # press Enter (blank)
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] Y    # set the root password
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y    # remove anonymous users
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n    # answer n so that root can still log in remotely
 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y    # remove the test database
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y    # reload the privilege tables
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
#

5. Log in to the database (its configuration file is /etc/my.cnf)
mysql -uroot -p000000
6. Allow the root user to log in to the database from any client machine
grant all privileges on *.* to root@'%' identified by "000000";
7. Hive configuration
cd /export/servers/apache-hive-1.2.1-bin/conf/
cp hive-env.sh.template hive-env.sh
HADOOP_HOME=/export/servers/hadoop    // add the Hadoop home variable to hive-env.sh
8. Edit hive-site.xml as follows
Command: vi hive-site.xml

<configuration>
        <property>
                <name>javax.jdo.option.ConnectionURL</name>
                <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
                <description>MySQL connection URL</description>
        </property>
        <property>
                <name>javax.jdo.option.ConnectionDriverName</name>
                <value>com.mysql.jdbc.Driver</value>
        </property>
        <property>
                <name>javax.jdo.option.ConnectionUserName</name>
                <value>root</value>
        </property>
        <property>
                <name>javax.jdo.option.ConnectionPassword</name>
                <value>000000</value>
        </property>
</configuration>

9. Download the MySQL Connector/J driver jar and place it in the lib directory of the Hive installation on Linux. Download page: https://downloads.mysql.com/archives/c-j/
10. Distribute Hive: copy the installation on hadoop01 to hadoop02 and hadoop03
scp -r /export/servers/apache-hive-1.2.1-bin/ hadoop02:/export/servers/
scp -r /export/servers/apache-hive-1.2.1-bin/ hadoop03:/export/servers/
11. On hadoop01, start the hiveserver2 service; it runs in the foreground, so this session can no longer be used for other commands
bin/hiveserver2
12. Open a second session window on hadoop01; jps now shows an extra RunJar process
13. Connect remotely from hadoop02 with the following commands:
bin/beeline
Enter the connection string: !connect jdbc:hive2://hadoop01:10000    // enter the username and password of hadoop01 to connect to the Hive service
Test the connection: show databases;    // the default database should be listed
14. Hive database operations
(1) Create a database
create database if not exists itcast;
Output:
0: jdbc:hive2://hadoop01:10000> create database if not exists itcast;
No rows affected (0.252 seconds)
(2) Show databases
0: jdbc:hive2://hadoop01:10000> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
| itcast         |
+----------------+--+
2 rows selected (0.042 seconds)
(3) Describe a database
0: jdbc:hive2://hadoop01:10000> desc database itcast;
+----------+----------+-------------------------------------------+-------------+-------------+-------------+--+
| db_name  | comment  |                 location                  | owner_name  | owner_type  | parameters  |
+----------+----------+-------------------------------------------+-------------+-------------+-------------+--+
| itcast   |          | hdfs://ns1/user/hive/warehouse/itcast.db  | root        | USER        |             |
+----------+----------+-------------------------------------------+-------------+-------------+-------------+--+
1 row selected (0.053 seconds)
(4) Switch databases
0: jdbc:hive2://hadoop01:10000> use itcast;
No rows affected (0.052 seconds)
(5) Alter a database (see the sketch below)
(6) Drop a database (see the sketch below)
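Steps (5) and (6) are listed without commands; a minimal sketch of typical statements, assuming the itcast database created above (the dbproperties key and value are purely illustrative):
-- (5) attach a custom property to the database
alter database itcast set dbproperties('createdate'='2020-04-25');
-- confirm the change
desc database extended itcast;
-- (6) drop the database; CASCADE also removes any tables it still contains
drop database if exists itcast cascade;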

15. Hive internal (managed) table operations
(1) On hadoop01, create a hivedata directory under /export/data
mkdir hivedata

(2) In the hivedata directory, create a user.txt file with the following content:
1,allen,18
2,tom,23
3,jerry,28
(3) On hadoop02, create an internal table t_user. The table's storage location in HDFS can also be set explicitly with a LOCATION clause (see the sketch after this step), but ...
0: jdbc:hive2://hadoop01:10000> create table t_user(id int,name string,age int) row format delimited fields terminated by ',';
No rows affected (0.255 seconds)
0: jdbc:hive2://hadoop01:10000> show tables;
+-----------+--+
| tab_name  |
+-----------+--+
| t_user    |
+-----------+--+
1 row selected (0.068 seconds)
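A minimal sketch of the LOCATION variant mentioned in step (3); the table name t_user2 and the HDFS path /hivedata/t_user2 are only illustrative:
create table t_user2(id int,name string,age int)
row format delimited fields terminated by ','
location '/hivedata/t_user2';
-- the table is still managed (internal), so dropping it also deletes the data under /hivedata/t_user2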
(4) Open the HDFS web UI (hadoop01:50070) and confirm that t_user is stored under /user/hive/warehouse/itcast.db/t_user
(5) On hadoop01, upload the user.txt file created above to /user/hive/warehouse/itcast.db/t_user; the table now has data
[root@hadoop01 hivedata]# hadoop fs -put user.txt /user/hive/warehouse/itcast.db/t_user
(6) On hadoop02, query the data in t_user
0: jdbc:hive2://hadoop01:10000> select * from t_user;
+------------+--------------+-------------+--+
| t_user.id  | t_user.name  | t_user.age  |
+------------+--------------+-------------+--+
| 1          | allen        | 18          |
| 2          | tom          | 23          |
| 3          | jerry        | 28          |
+------------+--------------+-------------+--+
3 rows selected (0.909 seconds)
(7) Create a table with a complex (map) column
On hadoop01, create the file student.txt under /export/data/hivedata with the following content:
1,zhangsan,唱歌:非常喜欢-跳舞:喜欢-游泳:一般般
2,lisi,打游戏:非常喜欢-篮球:不喜欢
(8) On hadoop02, create the table t_student
0: jdbc:hive2://hadoop01:10000> create table t_student(id int,name string,hobby map<string,string>) row format delimited fields terminated by ',' collection items terminated by '-' map keys terminated by ':';
No rows affected (0.191 seconds)
(9) On hadoop01, upload the file student.txt
hadoop fs -put student.txt /user/hive/warehouse/itcast.db/t_student
(10) On hadoop02, query the data
0: jdbc:hive2://hadoop01:10000> select * from t_student;
+---------------+-----------------+-------------------------------------+--+
| t_student.id  | t_student.name  |           t_student.hobby           |
+---------------+-----------------+-------------------------------------+--+
| 1             | zhangsan        | {"唱歌":"非常喜欢","跳舞":"喜欢","游泳":"一般般"}  |
| 2             | lisi            | {"打游戏":"非常喜欢","篮球":"不喜欢"}           |
+---------------+-----------------+-------------------------------------+--+
2 rows selected (0.17 seconds)

16. Hive external table operations
(1) On hadoop01, create stu.txt under /export/data/hivedata with the following content:
95001,李勇,男,20,CS
95002,刘晨,女,19,IS
95003,王敏,女,22,MA
95004,张立,男,19,IS
95005,刘刚,男,18,MA
95006,孙庆,男,23,CS
95007,易思玲,女,19,MA
95008,李娜,女,18,CS
95009,梦圆圆,女,18,MA
95010,孔小涛,男,19,CS
95011,包小柏,男,18,MA
95012,孙花,女,20,CS
95013,冯伟,男,21,CS
95014,王小丽,女,19,CS
95015,王君,男,18,MA
95016,钱国,男,21,MA
95017,王风娟,女,18,IS
95018,王一,女,19,IS
95019,邢小丽,女,19,IS
95020,赵钱,男,21,IS
95021,周二,男,17,MA
95022,郑明,男,20,MA
(2) Create the /stu directory in HDFS
hadoop fs -mkdir /stu
(3) Upload stu.txt to the /stu directory in HDFS
hadoop fs -put stu.txt /stu
(4) On hadoop02, create the external table stu_ext
0: jdbc:hive2://hadoop01:10000> create external table stu_ext(sno int,sname string,sex string,sage int,sdept string) row format
0: jdbc:hive2://hadoop01:10000> delimited fields terminated by ',' location '/stu';
(5) Query the data
0: jdbc:hive2://hadoop01:10000> select * from stu_ext;
+--------------+----------------+--------------+---------------+----------------+--+
| stu_ext.sno  | stu_ext.sname  | stu_ext.sex  | stu_ext.sage  | stu_ext.sdept  |
+--------------+----------------+--------------+---------------+----------------+--+
| 95001        | 李勇             | 男            | 20            | CS             |
| 95002        | 刘晨             | 女            | 19            | IS             |
| 95003        | 王敏             | 女            | 22            | MA             |
| 95004        | 张立             | 男            | 19            | IS             |
| 95005        | 刘刚             | 男            | 18            | MA             |
| 95006        | 孙庆             | 男            | 23            | CS             |
| 95007        | 易思玲            | 女            | 19            | MA             |
| 95008        | 李娜             | 女            | 18            | CS             |
| 95009        | 梦圆圆            | 女            | 18            | MA             |
| 95010        | 孔小涛            | 男            | 19            | CS             |
| 95011        | 包小柏            | 男            | 18            | MA             |
| 95012        | 孙花             | 女            | 20            | CS             |
| 95013        | 冯伟             | 男            | 21            | CS             |
| 95014        | 王小丽            | 女            | 19            | CS             |
| 95015        | 王君             | 男            | 18            | MA             |
| 95016        | 钱国             | 男            | 21            | MA             |
| 95017        | 王风娟            | 女            | 18            | IS             |
| 95018        | 王一             | 女            | 19            | IS             |
| 95019        | 邢小丽            | 女            | 19            | IS             |
| 95020        | 赵钱             | 男            | 21            | IS             |
| 95021        | 周二             | 男            | 17            | MA             |
| 95022        | 郑明             | 男            | 20            | MA             |
+--------------+----------------+--------------+---------------+----------------+--+
22 rows selected (0.185 seconds)

17. Hive partitioned table operations
Static (regular) partitions
(1) On hadoop01, create the file user_p.txt
1,allen
2,tom
3,jerry
(2) On hadoop02, create a partitioned table (one partition column)
0: jdbc:hive2://hadoop01:10000> create table t_user_p(id int,name string) partitioned by(country string) row format delimited fields terminated by ',';
(3) Load data
0: jdbc:hive2://hadoop01:10000> load data local inpath '/export/data/hivedata/user_p.txt' into table t_user_p partition(country='USA');
INFO  : Loading data to table itcast.t_user_p partition (country=USA) from file:/export/data/hivedata/user_p.txt
INFO  : Partition itcast.t_user_p{country=USA} stats: [numFiles=1, numRows=0, totalSize=22, rawDataSize=0]
No rows affected (1.042 seconds)
(4) Query the data
0: jdbc:hive2://hadoop01:10000> select * from t_user_p;
+--------------+----------------+-------------------+--+
| t_user_p.id  | t_user_p.name  | t_user_p.country  |
+--------------+----------------+-------------------+--+
| 1            | allen          | USA               |
| 2            | tom            | USA               |
| 3            | jerry          | USA               |
+--------------+----------------+-------------------+--+
3 rows selected (0.233 seconds)
(5) Add a partition
0: jdbc:hive2://hadoop01:10000> alter table t_user_p add partition (country='China') location '/user/hive/warehouse/itcast.db/t_user_p/country=China';
No rows affected (0.253 seconds)
(6) Rename a partition
0: jdbc:hive2://hadoop01:10000> alter table t_user_p partition (country='China') rename to partition(country='Japan');
No rows affected (0.259 seconds)
(7) Drop a partition
0: jdbc:hive2://hadoop01:10000> alter table t_user_p drop if exists partition (country='Japan');
INFO  : Dropped the partition country=Japan
No rows affected (0.47 seconds)
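A quick check of the remaining partitions after the add/rename/drop sequence above:
show partitions t_user_p;
-- only the country=USA partition loaded in step (3) should remain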

Dynamic partitions
On hadoop01:
(1) Enable dynamic partitioning
set hive.exec.dynamic.partition=true;
(2) Allow every partition column to be assigned dynamically (nonstrict mode)
set hive.exec.dynamic.partition.mode=nonstrict;
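Running set with just the property name echoes its current value, which is a convenient way to confirm the two settings above took effect in this session:
set hive.exec.dynamic.partition;
set hive.exec.dynamic.partition.mode;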
(3) On hadoop01, create the file dynamic_partition_table.txt under /export/data/hivedata with the following content:
2020-05-14,ip1
2020-05-14,ip2
2020-06-15,ip3
2020-06-15,ip4
2020-06-16,ip1
2020-06-16,ip2
(4) Create the source table
0: jdbc:hive2://hadoop01:10000> create table dynamic_partition_table(day string,ip string) row format delimited fields terminated by ',';
No rows affected (0.098 seconds)
(5) Load data
0: jdbc:hive2://hadoop01:10000> load data local inpath '/export/data/hivedata/dynamic_partition_table.txt' into table dynamic_partition_table;
INFO  : Loading data to table itcast.dynamic_partition_table from file:/export/data/hivedata/dynamic_partition_table.txt
INFO  : Table itcast.dynamic_partition_table stats: [numFiles=1, totalSize=90]
No rows affected (0.269 seconds)
(6) Create the target table
0: jdbc:hive2://hadoop01:10000> create table d_p_t(ip string) partitioned by (month string,day string);
No rows affected (0.055 seconds)
(7) Insert with dynamic partitioning
0: jdbc:hive2://hadoop01:10000> insert overwrite table d_p_t partition(month,day)
0: jdbc:hive2://hadoop01:10000> select ip,substr(day,1,7) as month,day from dynamic_partition_table;
INFO  : Number of reduce tasks is set to 0 since there's no reduce operator
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1586698718988_0001
INFO  : The url to track the job: http://hadoop01:8088/proxy/application_1586698718988_0001/
INFO  : Starting Job = job_1586698718988_0001, Tracking URL = http://hadoop01:8088/proxy/application_1586698718988_0001/
INFO  : Kill Command = /export/servers/hadoop/bin/hadoop job  -kill job_1586698718988_0001
INFO  : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
INFO  : 2020-04-25 18:01:04,957 Stage-1 map = 0%,  reduce = 0%
INFO  : 2020-04-25 18:01:19,728 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.18 sec
INFO  : MapReduce Total cumulative CPU time: 2 seconds 180 msec
INFO  : Ended Job = job_1586698718988_0001
INFO  : Stage-4 is selected by condition resolver.
INFO  : Stage-3 is filtered out by condition resolver.
INFO  : Stage-5 is filtered out by condition resolver.
INFO  : Moving data to: hdfs://ns1/user/hive/warehouse/itcast.db/d_p_t/.hive-staging_hive_2020-04-25_18-00-45_980_2947159928176839072-1/-ext-10000 from hdfs://ns1/user/hive/warehouse/itcast.db/d_p_t/.hive-staging_hive_2020-04-25_18-00-45_980_2947159928176839072-1/-ext-10002
INFO  : Loading data to table itcast.d_p_t partition (month=null, day=null) from hdfs://ns1/user/hive/warehouse/itcast.db/d_p_t/.hive-staging_hive_2020-04-25_18-00-45_980_2947159928176839072-1/-ext-10000
INFO  :      Time taken for load dynamic partitions : 659
INFO  :     Loading partition {month=2020-04, day=2020-04-10}
INFO  :     Loading partition {month=2020-04, day=2020-04-20}
INFO  :     Loading partition {month=2020-05, day=2020-05-05}
INFO  :     Loading partition {month=2020-05, day=2020-05-01}
INFO  :     Loading partition {month=2020-05, day=2020-05-10}
INFO  :     Loading partition {month=2020-04, day=2020-04-30}
INFO  :      Time taken for adding to write entity : 2
INFO  : Partition itcast.d_p_t{month=2020-04, day=2020-04-10} stats: [numFiles=1, numRows=1, totalSize=4, rawDataSize=3]
INFO  : Partition itcast.d_p_t{month=2020-04, day=2020-04-20} stats: [numFiles=1, numRows=1, totalSize=4, rawDataSize=3]
INFO  : Partition itcast.d_p_t{month=2020-04, day=2020-04-30} stats: [numFiles=1, numRows=1, totalSize=4, rawDataSize=3]
INFO  : Partition itcast.d_p_t{month=2020-05, day=2020-05-01} stats: [numFiles=1, numRows=1, totalSize=4, rawDataSize=3]
INFO  : Partition itcast.d_p_t{month=2020-05, day=2020-05-05} stats: [numFiles=1, numRows=1, totalSize=4, rawDataSize=3]
INFO  : Partition itcast.d_p_t{month=2020-05, day=2020-05-10} stats: [numFiles=1, numRows=1, totalSize=4, rawDataSize=3]
No rows affected (36.376 seconds)

(8) View the table's partitions
0: jdbc:hive2://hadoop01:10000> show partitions d_p_t;
+-------------------------------+--+
|           partition           |
+-------------------------------+--+
| month=2020-04/day=2020-04-10  |
| month=2020-04/day=2020-04-20  |
| month=2020-04/day=2020-04-30  |
| month=2020-05/day=2020-05-01  |
| month=2020-05/day=2020-05-05  |
| month=2020-05/day=2020-05-10  |
+-------------------------------+--+
6 rows selected (0.082 seconds)
0: jdbc:hive2://hadoop01:10000> 

18. Hive bucketed table operations
(1) Enable bucketing
set hive.enforce.bucketing=true;
(2) Set the number of reduce tasks to match the number of buckets
set mapreduce.job.reduces=4;
(3) Create the bucketed table
0: jdbc:hive2://hadoop01:10000> create table stu_buck(sno int,sname string,sex string,sage int,sdept string)
0: jdbc:hive2://hadoop01:10000> clustered by(sno) into 4 buckets row format delimited fields terminated by ',';
No rows affected (0.192 seconds)
(4) Create a staging table
0: jdbc:hive2://hadoop01:10000> create table stu_tmp(sno int,sname string,sex string,sage int,sdept string)
0: jdbc:hive2://hadoop01:10000> row format delimited fields terminated by ',';
No rows affected (0.091 seconds)
(5) Load data into the staging table
0: jdbc:hive2://hadoop01:10000> load data local inpath '/export/data/hivedata/stu.txt' into table stu_tmp;
INFO  : Loading data to table itcast.stu_tmp from file:/export/data/hivedata/stu.txt
INFO  : Table itcast.stu_tmp stats: [numFiles=1, totalSize=527]
No rows affected (0.377 seconds)
(6) Insert the data into stu_buck
0: jdbc:hive2://hadoop01:10000> insert overwrite table stu_buck select * from stu_tmp cluster by(sno);
(7) Check the bucketed data
0: jdbc:hive2://hadoop01:10000> select * from stu_buck;
+---------------+-----------------+---------------+----------------+-----------------+--+
| stu_buck.sno  | stu_buck.sname  | stu_buck.sex  | stu_buck.sage  | stu_buck.sdept  |
+---------------+-----------------+---------------+----------------+-----------------+--+
| 95004         | 张立              | 男             | 19             | IS              |
| 95008         | 李娜              | 女             | 18             | CS              |
| 95012         | 孙花              | 女             | 20             | CS              |
| 95016         | 钱国              | 男             | 21             | MA              |
| 95020         | 赵钱              | 男             | 21             | IS              |
| 95001         | 李勇              | 男             | 20             | CS              |
| 95005         | 刘刚              | 男             | 18             | MA              |
| 95009         | 梦圆圆             | 女             | 18             | MA              |
| 95013         | 冯伟              | 男             | 21             | CS              |
| 95017         | 王风娟             | 女             | 18             | IS              |
| 95021         | 周二              | 男             | 17             | MA              |
| 95002         | 刘晨              | 女             | 19             | IS              |
| 95006         | 孙庆              | 男             | 23             | CS              |
| 95010         | 孔小涛             | 男             | 19             | CS              |
| 95014         | 王小丽             | 女             | 19             | CS              |
| 95018         | 王一              | 女             | 19             | IS              |
| 95022         | 郑明              | 男             | 20             | MA              |
| 95003         | 王敏              | 女             | 22             | MA              |
| 95007         | 易思玲             | 女             | 19             | MA              |
| 95011         | 包小柏             | 男             | 18             | MA              |
| 95015         | 王君              | 男             | 18             | MA              |
| 95019         | 邢小丽             | 女             | 19             | IS              |
+---------------+-----------------+---------------+----------------+-----------------+--+
22 rows selected (0.131 seconds)
(8) On hadoop01, view a bucket's data with a hadoop command
hadoop fs -cat /user/hive/warehouse/itcast.db/stu_buck/000000_0

19. Hive data operations
1. The file emp.txt contains:
7369    SMITH    CLERK    7902    1980-12-17    800.00        29
7499    ALLEN    SALESMAN    7698    1981-2-20    1600.00    300.00    30
7521    WARD    SALESMAN    7698    1981-2-22    1250.00    500.00    30
7566    JONES    MANAGER    7839    1981-4-2    2975.00        20
7654    MARTIN    SALESMAN    7698    1981-9-28    1250.00    1400.00    30
7698    BLAKE    MANAGER    7839    1981-5-1    2850.00        30
7782    CLARK    MANAGER    7839    1981-6-9    2450.00        10
7788    SCOTT    ANALYST    7566    1987-4-19    3000.00        20
7839    KING    PRESIDENT        1981-11-17    5000.00        10
7844    TURNER    SALESMAN    7698    1981-9-8    1500.00    0.00    10
7876    ADAMS    CLERK    7788    1987-5-23    1100.00        20
7900    JAMES    CLERK    7698    1981-12-3    950.00        30
7902    FORD    ANALYST    7566    1981-12-3    3000.00        20
7934    MILLER    CLERK    7782    1982-1-23    1300.00        10
2. The file dept.txt contains:
10    ACCOUNTING    1700
20    RESEARCH    1800
30    SALES    1900
40    OPERATIONS    1700
3. Create the tables
0: jdbc:hive2://hadoop01:10000> create table emp(empno int,ename string,job string,mgr int,hiredate string,sal double,comm double,deptno int) row format delimited fields terminated by '\t';
No rows affected (0.164 seconds)
0: jdbc:hive2://hadoop01:10000> create table dept(deptno int,dname string,loc int) row format delimited fields terminated by '\t';
No rows affected (0.057 seconds)
4. On hadoop01, upload the files to the corresponding HDFS table directories
[root@hadoop01 hivedata]# hadoop fs -put emp.txt /user/hive/warehouse/emp
[root@hadoop01 hivedata]# hadoop fs -put dept.txt /user/hive/warehouse/dept
5. Basic queries
select * from emp;
select deptno,dname from dept;
select count(*) cnt from emp;
select sum(sal) sum_sal from emp;
select * from emp limit 5;
6. WHERE clause queries
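A few example WHERE queries against the emp table created above (the filter values are only illustrative):
select * from emp where sal > 2000;
select ename,sal,deptno from emp where deptno = 30 and sal >= 1250;
select * from emp where comm is not null;
select * from emp where ename like 'S%';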
