Hive: Environment Setup

This assumes the Hadoop and MySQL environments are already set up; my blog has separate posts covering both.

I. Download Hive and extract it to the target directory (this post uses hive-1.1.0-cdh5.7.0, downloaded from http://archive.cloudera.com/cdh5/cdh/5/)

tar zxvf ./hive-1.1.0-cdh5.7.0.tar.gz -C ~/app/

II. Hive configuration. Reference: https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-InstallationandConfiguration

1. Configure environment variables

1) vi .bash_profile

export HIVE_HOME=/home/hadoop/app/hive-1.1.0-cdh5.7.0

export PATH=$HIVE_HOME/bin:$PATH

2) source .bash_profile

source .bash_profile
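A quick way to confirm the variables took effect in the current shell:

echo $HIVE_HOME
which hive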

2. hive-1.1.0-cdh5.7.0/conf/hive-env.sh

1) cp hive-env.sh.template hive-env.sh

cp hive-env.sh.template hive-env.sh

2) vi hive-env.sh and add HADOOP_HOME

HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0

3. hive-1.1.0-cdh5.7.0/conf/hive-site.xml (create this file yourself)

(The MySQL JDBC driver jar must be copied into hive-1.1.0-cdh5.7.0/lib by hand; see the sketch after the configuration below.)

<?xml version="1.0"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/rdb_hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
  </property>
</configuration>
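The driver jar itself ships with MySQL, not Hive. A sketch of the copy step, where the jar name and source path are placeholders for whichever connector version you downloaded:

cp ~/software/mysql-connector-java-5.1.27-bin.jar ~/app/hive-1.1.0-cdh5.7.0/lib/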

III. Start Hive

hive-1.1.0-cdh5.7.0/bin/hive

Startup log:

[hadoop@hadoop01 bin]$ ./hive
which: no hbase in (/home/hadoop/app/hive-1.1.0-cdh5.7.0/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin:/home/hadoop/app/jdk1.8.0_131/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hadoop/.local/bin:/home/hadoop/bin)
Logging initialized using configuration in jar:file:/home/hadoop/app/hive-1.1.0-cdh5.7.0/lib/hive-common-1.1.0-cdh5.7.0.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive>

After startup, Hive automatically creates the metastore database and its tables in MySQL:

mysql> show tables;
+--------------------+
| Tables_in_rdb_hive |
+--------------------+
| CDS                |
| DATABASE_PARAMS    |
| DBS                |
| FUNCS              |
| FUNC_RU            |
| GLOBAL_PRIVS       |
| PARTITIONS         |
| PART_COL_STATS     |
| ROLES              |
| SDS                |
| SEQUENCE_TABLE     |
| SERDES             |
| SKEWED_STRING_LIST |
| TAB_COL_STATS      |
| TBLS               |
| VERSION            |
+--------------------+
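As an optional check, the VERSION table records which metastore schema Hive initialized (exact values depend on your install):

mysql -u root -p rdb_hive -e "select * from VERSION;"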

IV. Hive quick start

Implementing word count with Hive.

1. Create the table: create table hive_wordcount(context string);

hive> create table hive_wordcount(context string);
OK
Time taken: 1.203 seconds
hive> show tables;
OK
hive_wordcount
Time taken: 0.19 seconds, Fetched: 1 row(s)

2. Load the data: load data local inpath '/home/hadoop/data/hello.txt' into table hive_wordcount;

hive> load data local inpath '/home/hadoop/data/hello.txt' into table hive_wordcount;
Loading data to table default.hive_wordcount
Table default.hive_wordcount stats: [numFiles=1, totalSize=44]
OK
Time taken: 2.294 seconds
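The loaded file now lives on HDFS under the table's directory, assuming the default warehouse location /user/hive/warehouse:

hdfs dfs -ls /user/hive/warehouse/hive_wordcount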

3. Query the table to verify the load: select * from hive_wordcount;

Contents of hello.txt:

Deer Bear River
Car Car River
Deer Car Bear

hive> select * from hive_wordcount;
OK
Deer Bear River
Car Car River
Deer Car Bear
Time taken: 0.588 seconds, Fetched: 3 row(s)

4. Implement word count in SQL: select word,count(1) from hive_wordcount lateral view explode(split(context,' ')) wc as word group by word;

hive> select word,count(1) from hive_wordcount lateral view explode(split(context,' ')) wc as word group by word;
Query ID = hadoop_20180904070404_b23d8c2e-161b-4e65-a2cc-206ce343d9e8
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1536010835653_0002,
Kill Command = /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job -kill job_1536010835653_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-09-04 07:05:49,279 Stage-1 map = 0%, reduce = 0%
2018-09-04 07:06:01,893 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.95 sec
2018-09-04 07:06:10,804 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.44 sec
MapReduce Total cumulative CPU time: 3 seconds 440 msec
Ended Job = job_1536010835653_0002
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1  Cumulative CPU: 3.44 sec  HDFS Read: 8797 HDFS Write: 28 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 440 msec
OK
Bear 2
Car 3
Deer 2
River 2
Time taken: 37.441 seconds, Fetched: 4 row(s)

The results:

Bear 2
Car 3
Deer 2
River 2
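To make the lateral view concrete, the inner explode step can be run on its own. This is an illustrative query only, not part of the original walkthrough; it works equally well typed at the hive> prompt:

hive -e "select explode(split(context, ' ')) as word from hive_wordcount;"

Each input line is split on spaces and exploded into one row per word; the outer query then groups and counts those rows.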

Note: while creating the table I ran into an error:

Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:For direct MetaStore DB connections, we don't support retries at the client level.)

On its face this is a problem connecting to MySQL. Searching online turns up two common fixes:

1. Swap the MySQL JDBC driver jar, e.g. for mysql-connector-java-5.1.34-bin.jar. I tried this, and it did not solve the problem for me.
2. Change the encoding of the MetaStore database in MySQL to latin1. Tested; this solved it.
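A minimal sketch of the second fix, assuming the metastore database is the rdb_hive used above:

mysql -u root -p -e "alter database rdb_hive character set latin1;"

Note that altering the database only changes the default character set for tables created afterwards, so apply it before Hive first builds the metastore tables (or rebuild them after the change).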
