Setting up Hadoop 2.7.3 + Hive 2.1.1 with MySQL (Configuring Hive + MySQL + Connector) (Part 3)

This article walks through integrating MySQL as the metastore for Hadoop 2.7.3 and Hive 2.1.1: downloading the MySQL connector, configuring hive-site.xml, starting the services, and testing the environment. It also covers common problems and their fixes, such as HDFS safe mode and data-import errors.

Previous part: Setting up Hadoop 2.7.3 + Hive 2.1.1 with MySQL (Configuring Hive + Hadoop) (Part 2)

Preparation: download the latest connector from

https://dev.mysql.com/downloads/connector/j/

Example: download mysql-connector-java-5.1.41.tar.gz

1. Extract the connector file

1.1 Extract the archive:

[root@localhost Software]# tar xzf mysql-connector-java-5.1.41.tar.gz

[root@localhost Software]# cd mysql-connector-java-5.1.41/

1.2 List the extracted files:

[root@localhost mysql-connector-java-5.1.41]# ll

1.3 Copy the jar into the hive/lib directory:

[root@localhost Software]# cp mysql-connector-java-5.1.41/mysql-connector-java-5.1.41-bin.jar /usr/hive/lib/mysql-connector-java-5.1.41-bin.jar

2. Log in to MySQL and create the database hive_db (note: hive-site.xml references this name later)

2.1 Username: root, password: password. In another terminal, log in to MySQL and create the hive_db database:

[root@localhost hive]# mysql -u root -ppassword

mysql> create database hive_db;
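The walkthrough connects as root throughout. Outside a sandbox it is safer to give Hive its own MySQL account; a minimal sketch follows (the account name hive_user and its password are illustrative placeholders, not part of the original setup):

```sql
-- Create a dedicated metastore account and limit it to hive_db.
CREATE USER 'hive_user'@'localhost' IDENTIFIED BY 'hive_password';
GRANT ALL PRIVILEGES ON hive_db.* TO 'hive_user'@'localhost';
FLUSH PRIVILEGES;
```

If you take this route, point javax.jdo.option.ConnectionUserName and javax.jdo.option.ConnectionPassword in hive-site.xml at the new account instead of root.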

3. Edit hive-site.xml. Only the properties that change are listed below; everything else keeps its default:

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/usr/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>

<property>
  <name>hive.metastore.local</name>
  <value>true</value>
  <description>Use false if a production metastore server is used</description>
</property>

<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
  <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive_db?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore. To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL. For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>password</value>
  <description>password to use against metastore database</description>
</property>

4. Initialize the metastore schema with schematool

[root@localhost hive]# schematool -dbType mysql -initSchema

-- shown on success:

schemaTool completed

5. Start the Hive services

5.1 Start the metastore:

[root@localhost hive]# hive --service metastore &

-- When the startup messages stop scrolling, press Ctrl+C to get the prompt back (the backgrounded service keeps running).

5.2 Check the running processes:

[root@localhost hive]# jps

-- the process list now includes an extra RunJar entry:

51280 Jps

5985 SecondaryNameNode

6226 ResourceManager

45766 DataNode

5753 NameNode

51194 RunJar

6348 NodeManager
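To confirm the metastore from a script instead of reading the jps output by eye, the listing can be filtered. The sketch below runs against a captured sample so it works without a cluster; with a live cluster, replace the printf with a real jps call:

```shell
# Count RunJar entries (the Hive metastore shows up under that name).
# The printf lines stand in for real `jps` output.
printf '%s\n' '51194 RunJar' '5753 NameNode' '45766 DataNode' \
  | awk '$2 == "RunJar"' | wc -l
# → 1
```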

5.3 If remote access is needed, start HiveServer2 (default port 10000). The standalone hiveserver service was removed in Hive 2.x, so use hiveserver2:

[root@localhost hive]# hive --service hiveserver2 &

6. Test whether the configuration works

6.1 Prepare a text file to import: /root/桌面/Test/wc-in/a.txt

Format:

1,h

2,i

3,v

4,e
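The test file can also be generated from the shell instead of typed by hand. A sketch using /tmp/wc-in/a.txt as a stand-in path (the article itself uses /root/桌面/Test/wc-in/a.txt):

```shell
# Write the four comma-delimited rows used in the import test.
mkdir -p /tmp/wc-in
printf '%s\n' 1,h 2,i 3,v 4,e > /tmp/wc-in/a.txt
cat /tmp/wc-in/a.txt
```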

6.2 After Hive starts successfully, test creating a table

[root@localhost hadoop]# hive

6.2.1 Create a table, specifying comma (,) as the field delimiter:

hive> create table a(id int, name string)
    > row format delimited fields terminated by ',';

-- output:

OK

Time taken: 0.288 seconds
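The row format delimited fields terminated by ',' clause makes Hive split each line on commas. The same split can be previewed outside Hive with awk, purely as an illustration:

```shell
# Split the sample rows on ',' exactly as the table definition will,
# printing the id and name columns tab-separated.
printf '%s\n' 1,h 2,i 3,v 4,e | awk -F, '{ print $1 "\t" $2 }'
```

Each output line matches what select * from a will later return.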

6.2.2 Load a.txt into the table:

hive> load data local inpath '/root/桌面/Test/wc-in/a.txt' into table a;

-- output:

Loading data to table default.a

OK

Time taken: 0.763 seconds

6.2.3 Check the result:

hive> select * from a;

-- output:

OK

1 h

2 i

3 v

4 e

Time taken: 0.309 seconds, Fetched: 4 row(s)

6.3 Using dfs commands inside Hive

6.3.1 Check table a's storage path in HDFS:

hive> dfs -ls /usr/hive/warehouse/a;

-- output:

Found 1 items

-rw-r--r-- 1 root supergroup 16 2017-03-08 17:46 /usr/hive/warehouse/a/a.txt

6.3.2 View the file contents:

hive> dfs -cat /usr/hive/warehouse/a/*;

-- output:

1,h

2,i

3,v

4,e

7. Log in to MySQL and inspect the table metadata

[root@localhost conf]# mysql -u root -ppassword

mysql> use hive_db;

mysql> select TBL_ID, CREATE_TIME, DB_ID, OWNER, TBL_NAME, TBL_TYPE from TBLS;

-- output:

+--------+-------------+-------+-------+----------+---------------+
| TBL_ID | CREATE_TIME | DB_ID | OWNER | TBL_NAME | TBL_TYPE      |
+--------+-------------+-------+-------+----------+---------------+
|     37 |  1488966386 |     1 | root  | a        | MANAGED_TABLE |
+--------+-------------+-------+-------+----------+---------------+

1 row in set (0.03 sec)
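CREATE_TIME in TBLS is stored as a Unix epoch timestamp. With GNU date it converts to a readable form; the value below is the one returned by the query:

```shell
# Convert the metastore's epoch CREATE_TIME to a UTC timestamp.
date -u -d @1488966386 +'%Y-%m-%d %H:%M:%S'
# → 2017-03-08 09:46:26
```

This matches the 2017-03-08 17:46 file time shown by dfs -ls (local time, UTC+8).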

8. Inspect the generated files in HDFS (same as step 6.3 above)

8.1 Check table a's storage path:

[root@localhost hadoop]# hdfs dfs -ls /usr/hive/warehouse/a

-- output:

Found 1 items

-rw-r--r-- 1 root supergroup 16 2017-03-08 17:46 /usr/hive/warehouse/a/a.txt

8.2 View the contents:

[root@localhost hadoop]# hdfs dfs -cat /usr/hive/warehouse/a/*

-- output:

1,h

2,i

3,v

4,e

Common problems and fixes:

1. Error when starting hive

[root@localhost hive]# hive

-- error message:

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/root/24f1d91f-f32b-47e1-824d-ba26b02bd13e. Name node is in safe mode.

Cause: Hadoop is in safe mode.

-- Fix: leave safe mode

[root@localhost hadoop]# hadoop dfsadmin -safemode leave

-- output:

DEPRECATED: Use of this script to executehdfs command is deprecated.

Instead use the hdfs command for it.

Safe mode is OFF

2. Error when loading data

hive> load data local inpath '/root/桌面/Test/wc-in/a.txt' into table a;

-- error message:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /usr/hive/warehouse/a/a_copy_2.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

Cause: no DataNode is running.

Fix: start HDFS, then confirm with jps:

[root@localhost hive]# start-dfs.sh

[root@localhost hive]# jps

-- output:

51152 Jps

5985 SecondaryNameNode

6226 ResourceManager

45766 DataNode

5753 NameNode

6348 NodeManager




Original article: http://blog.csdn.net/roy_88/article/details/60884980
