1、show databases;
List all existing databases.
hive> show databases;
OK
default
Time taken: 0.141 seconds, Fetched: 1 row(s)
hive>
2、create database park;
Create the park database.
hive> create database park;
OK
Time taken: 0.376 seconds
hive> show databases;
OK
default
park
Time taken: 0.027 seconds, Fetched: 2 row(s)
hive>
3、use park;
Switch to the park database.
hive> use park;
OK
Time taken: 0.057 seconds
hive>
4、show tables;
List all tables in the current database.
hive> show tables;
OK
Time taken: 0.046 seconds
hive>
5、create table stu(id int,name string);
Create the stu table with its two columns.
hive> create table stu(id int,name string);
OK
Time taken: 0.482 seconds
hive> show tables;
OK
stu
Time taken: 0.055 seconds, Fetched: 1 row(s)
hive>
6、insert into stu values(1,'zhang');
Insert a row into the stu table. Note that in Hive this single-row insert launches a full MapReduce job, as the log below shows.
hive> insert into stu values(1,'zhang');
Query ID = root_20200528154450_bee5a3e2-dc10-4016-9d78-28f1bfea2406
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1590626315637_0002, Tracking URL = http://node1:8088/proxy/application_1590626315637_0002/
Kill Command = /usr/local/src/hadoop-2.9.2/bin/hadoop job -kill job_1590626315637_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2020-05-28 15:45:48,732 Stage-1 map = 0%, reduce = 0%
2020-05-28 15:46:13,543 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3
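The interactive statements above can also be collected into a script and run in batch mode with `hive -f`. A minimal sketch (the script name `park_setup.hql` is illustrative, not from the session above; the `if not exists` clauses are added so the script can be rerun safely):

```shell
# Write the session's statements into a reusable HiveQL script.
cat > park_setup.hql <<'EOF'
create database if not exists park;
use park;
create table if not exists stu(id int, name string);
insert into stu values(1,'zhang');
EOF

# Run it non-interactively (requires a working Hive installation):
# hive -f park_setup.hql
```

A single statement can likewise be run inline with `hive -e "show databases;"`, which is convenient for quick checks from the shell.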