Installing Hive on Windows with MySQL (reposted)

1. Install Hadoop.

2. Download mysql-connector-java-5.1.26-bin.jar (or another version) from Maven and place it in the lib folder under the Hive directory.

3. Configure the Hive environment variable: HIVE_HOME=F:\hadoop\apache-hive-2.1.1-bin
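As a sketch (assuming the path above), the variable can be set persistently from a Command Prompt with `setx`; adding %HIVE_HOME%\bin to PATH lets the `hive` command resolve from any directory:

```shell
:: Set HIVE_HOME for the current user (takes effect in newly opened consoles)
setx HIVE_HOME "F:\hadoop\apache-hive-2.1.1-bin"
:: Append Hive's bin directory to the user PATH
setx PATH "%PATH%;%HIVE_HOME%\bin"
```

Note that `setx` truncates very long PATH values, so editing PATH through the System Properties dialog works just as well.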

4. Hive configuration

Hive's configuration files live in $HIVE_HOME/conf, which contains four default configuration templates:

hive-default.xml.template              default template

hive-env.sh.template                   default hive-env.sh configuration

hive-exec-log4j.properties.template    default exec log4j configuration

hive-log4j.properties.template         default log4j configuration

Hive will run without any changes: by default it stores its metadata in an embedded Derby database. Since Derby is unfamiliar to most users, we switch the metadata store to MySQL, and we also want to change where data and logs are kept, so some configuration of our own is needed. The steps are described below.

(1) Create the configuration files

$HIVE_HOME/conf/hive-default.xml.template  -> $HIVE_HOME/conf/hive-site.xml

$HIVE_HOME/conf/hive-env.sh.template  -> $HIVE_HOME/conf/hive-env.sh

$HIVE_HOME/conf/hive-exec-log4j.properties.template ->  $HIVE_HOME/conf/hive-exec-log4j.properties

$HIVE_HOME/conf/hive-log4j.properties.template  -> $HIVE_HOME/conf/hive-log4j.properties
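These renames amount to copying each template, e.g. from a Command Prompt (paths assumed from step 3):

```shell
:: Materialize the four config files from their templates
cd /d F:\hadoop\apache-hive-2.1.1-bin\conf
copy hive-default.xml.template hive-site.xml
copy hive-env.sh.template hive-env.sh
copy hive-exec-log4j.properties.template hive-exec-log4j.properties
copy hive-log4j.properties.template hive-log4j.properties
```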

(2) Edit hive-env.sh

export HADOOP_HOME=F:\hadoop\hadoop-2.7.2

export HIVE_CONF_DIR=F:\hadoop\apache-hive-2.1.1-bin\conf

export HIVE_AUX_JARS_PATH=F:\hadoop\apache-hive-2.1.1-bin\lib

(3) Edit hive-site.xml

```xml
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
  <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>F:/hadoop/apache-hive-2.1.1-bin/hive/iotmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>F:/hadoop/apache-hive-2.1.1-bin/hive/iotmp</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>F:/hadoop/apache-hive-2.1.1-bin/hive/iotmp</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>F:/hadoop/apache-hive-2.1.1-bin/hive/iotmp/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?characterEncoding=UTF-8</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>root</value>
</property>
<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>true</value>
</property>
<property>
  <name>datanucleus.autoCreateTables</name>
  <value>true</value>
</property>
<property>
  <name>datanucleus.autoCreateColumns</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
  <description>
    Enforce metastore schema version consistency.
    True: Verify that version information stored in metastore matches with one from Hive jars. Also disable automatic
    schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
    proper metastore schema migration. (Default)
    False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
  </description>
</property>
```

Note: the HDFS directories referenced above must be created in Hadoop beforehand.
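For example (a sketch; the paths match the hive.metastore.warehouse.dir and hive.exec.scratchdir values configured above, and 733 is the permission mentioned in the scratch dir description):

```shell
:: Create the warehouse and scratch directories in HDFS
hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -mkdir -p /tmp/hive
:: Make them writable for Hive jobs
hadoop fs -chmod g+w /user/hive/warehouse
hadoop fs -chmod 733 /tmp/hive
```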

(4) Log file configuration: omitted here.

5. MySQL setup

(1) Create the hive database: create database hive default character set latin1;

(2) grant all on hive.* to 'hive'@'localhost' identified by 'hive';

flush privileges;

-- I connect as the root user, so I skipped this step.

6. Start Hive

(1) Start Hadoop: start-all.cmd

(2) Start the metastore service: hive --service metastore

(3) Start the Hive CLI: hive

If Hive starts successfully, the local-mode installation is complete.
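With Hive 2.x the metastore schema normally has to be initialized once before the first start; skipping this tends to produce the "Required table missing: VERSION" error discussed later. A hedged sketch (schematool ships in $HIVE_HOME/bin; on Windows it may need to be invoked through the hive service wrapper):

```shell
:: Create the metastore tables in the MySQL `hive` database
:: configured in javax.jdo.option.ConnectionURL
hive --service schematool -dbType mysql -initSchema
```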

7. Check the MySQL database

use hive; show tables;

8. Create a table in Hive: CREATE TABLE xp(id INT, name STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

9. Check the result in MySQL: select * from TBLS;

Problems encountered during installation

(1) Hive fails to start with: Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations

See http://宋亚飞.中国/post/98

(2)Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

See http://blog.csdn.net/freedomboy319/article/details/44828337

(3)Caused by: MetaException(message:Version information not found in metastore. )

See http://blog.csdn.net/youngqj/article/details/19987727

(4) Creating a table in Hive fails with "Specified key was too long; max key length is 767 bytes"

See http://blog.csdn.net/niityzu/article/details/46606581

Other references:

http://www.cnblogs.com/hbwxcw/p/5960551.html hive-1.2.1 installation steps

http://blog.csdn.net/jdplus/article/details/46493553 Hive local-mode installation, problems encountered, and solutions

http://www.coin163.com/it/x8681464370981050716/spark-Hive Problems and solutions during hive installation on pseudo-distributed CentOS 7

http://www.bogotobogo.com/Hadoop/BigData_hadoop_Hive_Install_On_Ubuntu_16_04.php APACHE HADOOP : HIVE 2.1.0 INSTALL ON UBUNTU 16.04
