Hive Basics (2): CRUD in Hive

Listing databases

hive> show databases;

OK

default

Time taken: 0.254 seconds, Fetched: 1 row(s)

hive>

# default is the built-in default database

Creating a database

hive> create database liuyao;

OK

Time taken: 0.055 seconds

Viewing the database directory in HDFS

#hdfs dfs -lsr /

lsr: DEPRECATED: Please use 'ls -R' instead.

drwxr-xr-x - root supergroup 0 2017-07-12 10:45 /user/hive

drwxr-xr-x - root supergroup 0 2017-07-12 10:45 /user/hive/warehouse

drwxr-xr-x - root supergroup 0 2017-07-12 10:45 /user/hive/warehouse/liuyao.db

# Every database you create produces an xx.db directory under /user/hive/warehouse in HDFS

# One database corresponds to one directory
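If the default warehouse path is not what you want, the directory can be chosen explicitly when the database is created. A minimal sketch (the name demo_db and the path /user/hive/demo_db are only illustrations, not part of the session above):

```sql
-- Create a database at an explicit HDFS location instead of the
-- default /user/hive/warehouse/<name>.db directory.
CREATE DATABASE IF NOT EXISTS demo_db
COMMENT 'example database'
LOCATION '/user/hive/demo_db';

-- Show the database's location and comment:
DESCRIBE DATABASE demo_db;
```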

Viewing database metadata in MySQL

mysql> use dbhive;

mysql> select * from DBS;

+-------+-----------------------+-----------------------------------------------+---------+------------+------------+
| DB_ID | DESC                  | DB_LOCATION_URI                               | NAME    | OWNER_NAME | OWNER_TYPE |
+-------+-----------------------+-----------------------------------------------+---------+------------+------------+
|     1 | Default Hive database | hdfs://hadoop-1/user/hive/warehouse           | default | public     | ROLE       |
|     2 | NULL                  | hdfs://hadoop-1/user/hive/warehouse/liuyao.db | liuyao  | root       | USER       |
+-------+-----------------------+-----------------------------------------------+---------+------------+------------+

# Metadata for every database you create is maintained in MySQL (the metastore)

Creating a table

hive> use liuyao;

OK

Time taken: 0.057 seconds

hive> create table test(id int, name string);

OK

Time taken: 0.458 seconds

hive>

Viewing table metadata in MySQL

mysql> select * from TBLS;

+--------+-------------+-------+------------------+-------+-----------+-------+----------+---------------+--------------------+--------------------+
| TBL_ID | CREATE_TIME | DB_ID | LAST_ACCESS_TIME | OWNER | RETENTION | SD_ID | TBL_NAME | TBL_TYPE      | VIEW_EXPANDED_TEXT | VIEW_ORIGINAL_TEXT |
+--------+-------------+-------+------------------+-------+-----------+-------+----------+---------------+--------------------+--------------------+
|      1 |  1499871180 |     2 |                0 | root  |         0 |     1 | test     | MANAGED_TABLE | NULL               | NULL               |
+--------+-------------+-------+------------------+-------+-----------+-------+----------+---------------+--------------------+--------------------+

1 row in set (0.00 sec)

# DB_ID references the owning database's ID in the DBS table
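Because DB_ID in TBLS references DB_ID in DBS, the two metastore tables can be joined to list each table together with the database it belongs to. A sketch, run against the MySQL metastore database (dbhive above); a read-only query like this is harmless, but metastore tables should never be modified by hand:

```sql
-- Which database does each table belong to?
-- Joins the metastore's TBLS and DBS tables on DB_ID.
SELECT d.NAME AS db_name,
       t.TBL_NAME,
       t.TBL_TYPE
FROM TBLS t
JOIN DBS d ON t.DB_ID = d.DB_ID;
```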

The table directory created in HDFS

#hdfs dfs -lsr /

drwxr-xr-x - root supergroup 0 2017-07-12 10:53 /user/hive/warehouse/liuyao.db/test
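The same table metadata (columns, owner, HDFS location, managed vs. external type) can also be read from inside the Hive CLI, without querying MySQL directly; for example:

```sql
-- Print the table's columns plus detailed metadata such as
-- Location, Owner, CreateTime and Table Type (MANAGED_TABLE here).
DESCRIBE FORMATTED test;
```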

Inserting data

hive> use liuyao;

OK

Time taken: 0.133 seconds

hive> insert into test values(2,"tom");

WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.

Query ID = root_20170712111559_38d90f7b-845d-441c-9fc2-77ceef15c446

Total jobs = 3

Launching Job 1 out of 3

Number of reduce tasks is set to 0 since there's no reduce operator

Starting Job = job_1498146166646_0003, Tracking URL = http://hadoop-1:8088/proxy/application_1498146166646_0003/

Kill Command = /soft/hadoop-2.7.3/bin/hadoop job -kill job_1498146166646_0003

Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0

2017-07-12 11:16:08,543 Stage-1 map = 0%, reduce = 0%

2017-07-12 11:16:14,960 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.65 sec

MapReduce Total cumulative CPU time: 1 seconds 650 msec

Ended Job = job_1498146166646_0003

Stage-4 is selected by condition resolver.

Stage-3 is filtered out by condition resolver.

Stage-5 is filtered out by condition resolver.

Moving data to directory hdfs://hadoop-1/user/hive/warehouse/liuyao.db/test/.hive-staging_hive_2017-07-12_11-15-59_225_3827478452355860952-1/-ext-10000

Loading data to table liuyao.test

MapReduce Jobs Launched:

Stage-Stage-1: Map: 1 Cumulative CPU: 1.65 sec HDFS Read: 4060 HDFS Write: 73 SUCCESS

Total MapReduce CPU Time Spent: 1 seconds 650 msec

OK

Time taken: 17.353 seconds

hive>

An insert statement launches a MapReduce job.
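Because each INSERT ... VALUES statement spins up a full MapReduce job, larger datasets are normally loaded from files instead. A sketch, assuming a file /tmp/test.txt already exists on the local filesystem and its two fields use Hive's default field delimiter (Ctrl-A, \001), since the table was created without a ROW FORMAT clause:

```sql
-- Copy a local file into the table's HDFS directory in one step;
-- no MapReduce job is launched for a plain LOAD.
LOAD DATA LOCAL INPATH '/tmp/test.txt' INTO TABLE test;
```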

Querying data

hive>

hive> select * from test;

OK

1	tom

2	tom

Time taken: 0.276 seconds, Fetched: 2 row(s)

hive>

# select * from test reads the data directly from the table's files in HDFS
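Since the rows live as plain files under the table's directory, they can also be inspected straight from HDFS. A sketch (the file name 000000_0 is what a single-map insert typically produces, but it may differ on your cluster):

```shell
# List and dump the table's data files directly.
hdfs dfs -ls /user/hive/warehouse/liuyao.db/test
hdfs dfs -cat /user/hive/warehouse/liuyao.db/test/000000_0
```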
