4. Installing and Configuring the Hive Component

Prerequisites: the JDK and Hadoop are installed on all three virtual machines, and the fully distributed Hadoop cluster is already configured.

(1) Download and Extract the Required Files

Extract Hive:

[root@master ~]# tar -zxvf apache-hive-2.0.0-bin.tar.gz -C /usr/local/src/

Rename the Hive directory:

mv /usr/local/src/apache-hive-2.0.0-bin/ /usr/local/src/hive

Change the owner and group of the Hive directory to hadoop:

chown -R hadoop:hadoop /usr/local/src/hive
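
A quick optional check (my addition, not part of the original steps) that the ownership change took effect:

[root@master ~]# ls -ld /usr/local/src/hive    # owner and group should now both be hadoop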

(2) Set Up the Hive Environment

1. Remove the MariaDB Database

Check whether MariaDB is installed:

rpm -qa | grep mariadb

Remove MariaDB:

[root@master ~]# rpm -e --nodeps mariadb-libs

2. Install the MySQL Database
[root@master ~]# rpm -ivh mysql*.rpm

Add the following settings to /etc/my.cnf, directly below the symbolic-links=0 line:

[root@master ~]# vim /etc/my.cnf

default-storage-engine=innodb
innodb_file_per_table
collation-server=utf8_general_ci
init-connect='SET NAMES utf8'
character-set-server=utf8
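
For reference, the relevant part of /etc/my.cnf after the edit should look roughly like the sketch below; the [mysqld] section header is assumed from a stock MySQL configuration and may differ slightly on your system:

[mysqld]
...
symbolic-links=0
default-storage-engine=innodb
innodb_file_per_table
collation-server=utf8_general_ci
init-connect='SET NAMES utf8'
character-set-server=utf8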

Start MySQL:

[root@master ~]# systemctl start mysqld
[root@master ~]# systemctl status mysqld

Check the state of port 3306:

[root@master ~]# ss -anpt|grep 3306

Look up the temporary password generated for the MySQL root user:

[root@master ~]# cat /var/log/mysqld.log |grep password
2021-10-15T06:54:54.931372Z 1 [Note] A temporary password is generated for root@localhost: fegRe_lh;3aT

Initialize MySQL security settings with mysql_secure_installation:

[root@master ~]# mysql_secure_installation

Securing the MySQL server deployment.

Enter password for user root:
# enter the temporary password found above
The existing password for the user account root has expired. Please set a new password.

New password:
# enter a new password
Re-enter new password:    # re-enter the new password
The 'validate_password' plugin is installed on the server.
The subsequent steps will run with the existing configuration
of the plugin.
Using existing password for root.

Estimated strength of the password: 100
Change the password for root ? ((Press y|Y for Yes, any other key for No) : y

New password:
# enter the new password
Re-enter new password:
# re-enter the new password
Estimated strength of the password: 100
Do you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) : y
By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
Success.


Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : n

 ... skipping.
By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.


Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
 - Dropping test database...
Success.

 - Removing privileges on test database...
Success.

Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
Success.

All done!

Grant the root user privileges to access the MySQL databases both locally and remotely:

[root@master ~]# mysql -uroot -p

Grant root local access:

mysql> grant all privileges on *.* to root@'localhost' identified by 'Password123$';
Query OK, 0 rows affected, 1 warning (0.00 sec)

Grant root remote access:

mysql> grant all privileges on *.* to root@'%' identified by 'Password123$';
Query OK, 0 rows affected, 1 warning (0.00 sec)

Reload the privilege tables:

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Check the grants for the root user:

mysql> select user,host from mysql.user where user='root';
+------+-----------+
| user | host      |
+------+-----------+
| root | %         |
| root | localhost |
+------+-----------+
2 rows in set (0.00 sec)

Exit MySQL:

mysql> exit
Bye
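
To verify the remote grant, you can optionally try connecting from one of the slave nodes; this check is my addition and assumes the mysql client is also available there:

[root@slave1 ~]# mysql -h master -uroot -p -e 'select version();'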

3. Configure the Hive Component

Set the Hive environment variables and make them take effect:

[root@master ~]# vim /etc/profile
export HIVE_HOME=/usr/local/src/hive
export PATH=$PATH:$HIVE_HOME/bin
[root@master ~]# source /etc/profile
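
Optionally, confirm the variables are visible in the current shell (this check is my addition):

[root@master ~]# echo $HIVE_HOME
/usr/local/src/hive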

Switch to the hadoop user:

su - hadoop

Copy the hive-default.xml.template file in /usr/local/src/hive/conf/ to hive-site.xml:

[hadoop@master ~]$ cp /usr/local/src/hive/conf/hive-default.xml.template /usr/local/src/hive/conf/hive-site.xml

Edit hive-site.xml so that Hive connects to the MySQL database, and point Hive's temporary files at a fixed directory.

<!-- Set the MySQL connection URL (around line 490) -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>

<!-- Set the password for the MySQL root user (around line 476) -->
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>Password123$</value>
    <description>password to use against metastore database</description>
  </property>

<!-- Metastore schema version verification; if it already defaults to false, no change is needed (around line 654) -->
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in metastore matches with one from Hive jars.  Also disable automatic
            schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
            proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
  </property>

<!-- Set the JDBC driver class (around line 879) -->
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>

<!-- Set the database user name javax.jdo.option.ConnectionUserName to root (around line 905) -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
  </property>
     
<!-- Replace ${system:java.io.tmpdir}/${system:user.name} with /usr/local/src/hive/tmp in each of the following locations -->

<!-- around line 1435 -->
  <property>
    <name>hive.querylog.location</name>
    <value>/usr/local/src/hive/tmp</value>
    <description>Location of Hive run time structured log file</description>
  </property>

     
<!-- around line 45 -->
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/usr/local/src/hive/tmp</value>
    <description>Local scratch space for Hive jobs</description>
  </property>

     
     
<!-- around line 49 -->
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/usr/local/src/hive/tmp/resources/dir</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>

     
     
<!-- around line 3424 -->
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/usr/local/src/hive/tmp/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>
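
If you would rather not edit every occurrence by hand, a rough equivalent is to substitute the placeholder globally with sed; this one-liner is my own sketch rather than part of the original write-up, so verify the result with grep afterwards:

[hadoop@master ~]$ sed -i 's#${system:java.io.tmpdir}/${system:user.name}#/usr/local/src/hive/tmp#g' /usr/local/src/hive/conf/hive-site.xml
[hadoop@master ~]$ grep -n '/usr/local/src/hive/tmp' /usr/local/src/hive/conf/hive-site.xml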

     

Create the temporary directory tmp inside the Hive installation directory:

[hadoop@master ~]$ mkdir /usr/local/src/hive/tmp
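
Depending on the values you set in hive-site.xml, you may also want to pre-create the sub-directories referenced there (an optional addition on my part):

[hadoop@master ~]$ mkdir -p /usr/local/src/hive/tmp/resources/dir /usr/local/src/hive/tmp/operation_logs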

4. Initialize the Hive Metadata

Switch back to the root user:

[hadoop@master ~]$ exit

Copy the MySQL JDBC driver into the Hive installation's /usr/local/src/hive/lib directory:

[root@master ~]# cp ~/mysql-connector-java-5.1.47.jar /usr/local/src/hive/lib
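
A quick way to confirm the driver jar is in place (my addition):

[root@master ~]# ls /usr/local/src/hive/lib/ | grep mysql-connector
mysql-connector-java-5.1.47.jar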

Switch to the hadoop user:

su - hadoop

Delete the jline jar under /usr/local/src/hadoop/share/hadoop/yarn/lib (the old jline shipped with Hadoop conflicts with the one Hive uses):

[hadoop@master ~]$ rm -f /usr/local/src/hadoop/share/hadoop/yarn/lib/jline-*.jar

Start the Hadoop processes:

[hadoop@master ~]$ start-all.sh		# on the master node
[hadoop@master ~]$ jps
2272 SecondaryNameNode
2675 Jps
2411 ResourceManager
2079 NameNode

[hadoop@slave1 ~]$ jps		# on the slave1 node
1459 Jps
1205 DataNode
1342 NodeManager

[hadoop@slave2 ~]$ jps		# on the slave2 node
1461 Jps
1208 DataNode
1322 NodeManager


Initialize the Hive metadata:

[hadoop@master ~]$ schematool -initSchema -dbType mysql

If the last line of output reads schemaTool completed, the initialization succeeded.
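
As an optional sanity check (my addition), confirm that the metastore tables were created in the hive database in MySQL:

[hadoop@master ~]$ mysql -uroot -p -e 'show tables from hive;'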

5. Start Hive

Hive can be started from any directory simply by running the hive command:

[hadoop@master ~]$ hive
which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/local/src/jdk1.8/bin:/usr/local/src/hadoop/bin:/usr/local/src/hadoop/sbin:/usr/local/src/hive/bin:/home/hadoop/.local/bin:/home/hadoop/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/src/hive/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/src/hive/lib/hive-common-2.0.0.jar!/hive-log4j2.properties
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> exit;	# quit Hive
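
After start-up you can run a few simple HiveQL statements as a smoke test before exiting; these statements are my addition, not part of the original steps, and simply confirm that the metastore connection works:

hive> show databases;
hive> create database if not exists test_db;
hive> drop database test_db;
hive> exit;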
