Installing and Configuring Hive on CentOS 7

Modify the Hadoop configuration

[hadoop@master ~]$ cd /home/hadoop/software/hadoop-2.7.3/etc/hadoop/
[hadoop@master hadoop]$ vi core-site.xml
Add the following properties:

<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>
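If the Hadoop cluster is already running, the new proxy-user settings must be reloaded before they take effect. A minimal sketch (alternatively, simply restart the cluster):

[hadoop@master ~]$ hdfs dfsadmin -refreshSuperUserGroupsConfiguration
[hadoop@master ~]$ yarn rmadmin -refreshSuperUserGroupsConfiguration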

1. master

1) Upload the Hive tarball to software and extract it:
Extract:
[hadoop@master software]$ tar -zxvf apache-hive-2.1.1-bin.tar.gz
Create a symlink:
[hadoop@master software]$ ln -s apache-hive-2.1.1-bin hive

2) Rename the configuration file templates
[hadoop@master software]$ cd /home/hadoop/software/hive/conf/
[hadoop@master conf]$ cp hive-env.sh.template hive-env.sh
[hadoop@master conf]$ cp hive-log4j2.properties.template hive-log4j2.properties
[hadoop@master conf]$ cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties

3) Edit hive-env.sh
[hadoop@master conf]$ vi hive-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_171
export HADOOP_HOME=/home/hadoop/software/hadoop-2.7.3
export HIVE_HOME=/home/hadoop/software/hive
export HIVE_CONF_DIR=/home/hadoop/software/hive/conf

4) Create hive-site.xml
[hadoop@master conf]$ vi hive-site.xml

<?xml version="1.0"?> 
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
<configuration>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/home/hadoop/software/hive/iotmpdir</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
  </property>
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/home/hadoop/software/hive/iotmpdir</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/home/hadoop/software/hive/iotmpdir</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/home/hadoop/software/hive/iotmpdir</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/home/hadoop/software/hive/iotmpdir/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.6.102:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <!-- base HDFS path -->
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://slave1:9083</value>
  </property>
</configuration>
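Several of the paths above point at an iotmpdir directory that Hive will not necessarily create on its own; pre-creating the local directory avoids startup errors in some setups (hive.exec.scratchdir itself is resolved on HDFS and is created by Hive):

[hadoop@master conf]$ mkdir -p /home/hadoop/software/hive/iotmpdir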

5) Upload the MySQL JDBC driver mysql-connector-java-5.1.5-bin.jar to /home/hadoop/software/hive/lib/
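For example, assuming the jar was uploaded to the software directory:

[hadoop@master software]$ cp mysql-connector-java-5.1.5-bin.jar /home/hadoop/software/hive/lib/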
6) Copy the configured Hive to the other machines
[hadoop@master conf]$ cd /home/hadoop/software/
[hadoop@master software]$ scp -r apache-hive-2.1.1-bin slave1:~/software/
[hadoop@master software]$ scp -r apache-hive-2.1.1-bin slave2:~/software/

7) Create the hive symlink on slave1 and slave2
[hadoop@slave1 ~]$ ln -s /home/hadoop/software/apache-hive-2.1.1-bin /home/hadoop/software/hive
[hadoop@slave2 ~]$ ln -s /home/hadoop/software/apache-hive-2.1.1-bin /home/hadoop/software/hive
8) Configure environment variables
[hadoop@master ~]$ vi .bashrc

Append the following two lines at the end:

export HIVE_HOME=/home/hadoop/software/hive
export PATH=$PATH:$HIVE_HOME/bin

Make the environment variables take effect:
[hadoop@master ~]$ . .bashrc
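A quick sanity check that the variables took effect:

[hadoop@master ~]$ which hive
[hadoop@master ~]$ hive --version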
9) Copy the .bashrc file to the other two machines:
[hadoop@master ~]$ scp -r .bashrc slave1:/home/hadoop/
[hadoop@master ~]$ scp -r .bashrc slave2:/home/hadoop/
Apply .bashrc on slave1 and slave2:
[hadoop@slave1 ~]$ . .bashrc
[hadoop@slave2 ~]$ . .bashrc
2. slave2: install MySQL
Switch to the root user.
1) Download the MySQL repository package
[root@slave2 ~]# wget http://dev.mysql.com/get/mysql57-community-release-el7-8.noarch.rpm

2) Install the repository
[root@slave2 ~]# rpm -ivh mysql57-community-release-el7-8.noarch.rpm
3) Check that the repo files are present
[root@slave2 ~]# ls /etc/yum.repos.d
4) Install MySQL
[root@slave2 ~]# yum install mysql-community-server
5) Start the MySQL service
Reload all modified unit files:
[root@slave2 ~]# systemctl daemon-reload
Start the service:
[root@slave2 ~]# systemctl start mysqld
Enable it at boot:
[root@slave2 ~]# systemctl enable mysqld
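Optionally confirm the service is running before continuing:

[root@slave2 ~]# systemctl status mysqld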
6) Retrieve the initial password
[root@slave2 ~]# grep 'temporary password' /var/log/mysqld.log
7) Log in to MySQL
[root@slave2 ~]# mysql -uroot -p

8) Relax the MySQL password policy
mysql> set global validate_password_policy=0;
mysql> set global validate_password_length=4;
mysql> alter user 'root'@'localhost' identified by '123456';
mysql> \q
9) Enable remote login
[root@slave2 ~]# mysql -uroot -p123456
mysql> create database hive;
mysql> create user hive identified by '123456';
mysql> grant all privileges on *.* to hive@'%' identified by '123456';
mysql> flush privileges;
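To verify that remote access works, you can try logging in from another node (assuming the mysql client is available there; 192.168.6.102 is the slave2 address used in hive-site.xml):

[hadoop@slave1 ~]$ mysql -h 192.168.6.102 -uhive -p123456 hive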
3. slave1: initialize the database and generate the metastore schema
[hadoop@slave1 ~]$ cd /home/hadoop/software/hive/bin/
[hadoop@slave1 bin]$ ./schematool -initSchema -dbType mysql
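If initialization succeeds, the hive database on slave2 now contains the metastore tables; a quick check on slave2:

[root@slave2 ~]# mysql -uhive -p123456 -e 'show tables;' hive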
4. Start Hive
(Note: the steps below switch between machines.)
Create the hive.metastore.warehouse.dir directory on HDFS and adjust its ownership:
[hadoop@master ~]$ hadoop fs -mkdir -p /user/hive/warehouse
[hadoop@master ~]$ hadoop fs -chown -R hive:hive /user/hive
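An optional sanity check that the directory exists with the expected owner:

[hadoop@master ~]$ hadoop fs -ls /user/hive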
1) Start the metastore on slave1:
[hadoop@slave1 ~]$ hive --service metastore
Wait a moment, then open another connection to slave1 and run the command below to check whether the service started; if there is no output, the port has not opened yet, so wait a little and run netstat again.
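A typical check (assuming the default metastore port 9083 configured in hive.metastore.uris):

[hadoop@slave1 ~]$ netstat -ntpl | grep 9083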
2) Start HiveServer2 on master:
[hadoop@master ~]$ hive --service hiveserver2
Wait a moment, then open another connection to master and run the command below to check whether the service started; if there is no output, wait for the port to open and run netstat again.
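Similarly for HiveServer2, which listens on port 10000 by default:

[hadoop@master ~]$ netstat -ntpl | grep 10000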

On master, run the hive command to enter the Hive CLI:
[hadoop@master ~]$ hive
hive> show databases;
OK
default
Time taken: 1.354 seconds, Fetched: 1 row(s)
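With HiveServer2 up, you can also connect through Beeline instead of the Hive CLI; a sketch, assuming the default port 10000 and the hadoop user:

[hadoop@master ~]$ beeline -u jdbc:hive2://master:10000 -n hadoop
0: jdbc:hive2://master:10000> show databases;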
