Hadoop Installation and Deployment (3): Installing Hive

Installing MySQL

MySQL is installed on the master node.

1) Remove the MySQL packages that ship with the system (only the packages whose names start with mysql)

rpm -qa | grep -i mysql

The -i flag makes the match case-insensitive.

Two installed packages show up:

MySQL-server-5.6.19-1.linux_glibc2.5.x86_64.rpm

MySQL-client-5.6.19-1.linux_glibc2.5.x86_64.rpm

Remove these two packages (dropping the .rpm suffix):

rpm -e MySQL-client-5.6.19-1.linux_glibc2.5.x86_64

rpm -e MySQL-server-5.6.19-1.linux_glibc2.5.x86_64

Check for leftover directories:

whereis mysql

Then delete the mysql directory:

rm -rf /usr/lib64/mysql

Delete the related files:

rm -rf /usr/my.cnf

rm -rf /root/.mysql_secret

Most importantly:

rm -rf /var/lib/mysql

Note: on CentOS 7, also remove the bundled MariaDB:

rpm -qa | grep mariadb

rpm -e --nodeps mariadb-libs-5.5.44-2.el7.centos.x86_64

2) Install the MySQL dependencies

yum install vim libaio net-tools

3) Install the MySQL 5.5.39 RPM packages

rpm -ivh /opt/MySQL-server-5.5.39-2.el6.x86_64.rpm

rpm -ivh /opt/MySQL-client-5.5.39-2.el6.x86_64.rpm

4) Copy the configuration file

cp /usr/share/mysql/my-medium.cnf /etc/my.cnf

5) Start the MySQL service

service mysql start

6) Enable MySQL at boot

chkconfig mysql on

7) Set the root user's login password

/usr/bin/mysqladmin -u root password 'root'

8) Log in to MySQL as the root user

mysql -uroot -proot

Installing Hive

Hive is installed on the master node.

1) In MySQL, create the hive user and the hive database

insert into mysql.user(Host,User,Password) values("localhost","hive",password("hive"));

create database hive;

grant all on hive.* to hive@'%' identified by 'hive';

grant all on hive.* to hive@'localhost' identified by 'hive';

flush privileges;
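As an optional sanity check (not strictly required), the new account and its grants can be inspected from the same mysql session before moving on:

select Host, User from mysql.user where User='hive';

show grants for 'hive'@'localhost';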

2) Exit MySQL

exit

3) Add the environment variables
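A minimal sketch, assuming Hive is unpacked to /opt/apache-hive-1.2.1-bin (the path used in the steps below) and that the variables go in /etc/profile: append the following lines and then run source /etc/profile.

export HIVE_HOME=/opt/apache-hive-1.2.1-bin

export PATH=$PATH:$HIVE_HOME/bin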

4) Edit hive-site.xml
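If hive-site.xml does not exist yet, it is usually created from the template shipped in the Hive conf directory (a sketch, assuming the default layout of the apache-hive-1.2.1-bin tarball), and the properties below are then set inside the <configuration> element:

cp /opt/apache-hive-1.2.1-bin/conf/hive-default.xml.template /opt/apache-hive-1.2.1-bin/conf/hive-site.xml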

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>

<property>
  <name>hive.hwi.listen.port</name>
  <value>9999</value>
  <description>This is the port the Hive Web Interface will listen on</description>
</property>

<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>true</value>
  <description>creates necessary schema on a startup if one doesn't exist. set this to false, after creating it once</description>
</property>

<property>
  <name>datanucleus.fixedDatastore</name>
  <value>false</value>
  <description/>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>Username to use against metastore database</description>
</property>

<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/opt/apache-hive-1.2.1-bin/iotmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>

<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/opt/apache-hive-1.2.1-bin/iotmp</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>

<property>
  <name>hive.querylog.location</name>
  <value>/opt/apache-hive-1.2.1-bin/iotmp</value>
  <description>Location of Hive run time structured log file</description>
</property>

5) Put mysql-connector-java-5.1.6-bin.jar into Hive's lib directory

mv /home/hadoop/Desktop/mysql-connector-java-5.1.6-bin.jar /opt/apache-hive-1.2.1-bin/lib/

6) Copy jline-2.12.jar into the corresponding Hadoop directory to replace jline-0.9.94.jar; otherwise Hive will report an error on startup

cp /opt/apache-hive-1.2.1-bin/lib/jline-2.12.jar /opt/hadoop-2.6.3/share/hadoop/yarn/lib/

mv /opt/hadoop-2.6.3/share/hadoop/yarn/lib/jline-0.9.94.jar /opt/hadoop-2.6.3/share/hadoop/yarn/lib/jline-0.9.94.jar.bak

7) Create Hive's temporary directory

mkdir /opt/apache-hive-1.2.1-bin/iotmp

8) Start and test Hive

After starting Hadoop, run the hive command:

hive

Test by entering show databases;

hive> show databases;

OK

default

Time taken: 0.907 seconds, Fetched: 1 row(s)
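As a further optional check (a minimal sketch, assuming the metastore configuration above), create a throwaway table in Hive and then confirm that the DataNucleus metastore tables were auto-created in the hive database on MySQL:

hive> create table test_tb(id int);

mysql -uhive -phive hive

mysql> show tables;

The listing should include metastore tables such as DBS, TBLS and COLUMNS_V2.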

Reposted from: https://www.cnblogs.com/niuxiaoha/p/5303780.html
