Atlas Study Notes (Part 1)

I. Building and packaging apache-atlas-1.1.0-sources

Source package download: apache-atlas-1.1.0-sources.tar.gz

  Atlas currently has to be built from source before it can be installed. Atlas is written in Java but is started through Python scripts, so the build environment must provide the following (a quick version check is sketched right after this list):

  • JDK 1.8+
  • Maven 3.x
  • Python 2.7+
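
  Before starting the build, it is worth confirming the toolchain on the build machine (a minimal check; the version strings in the comments are only examples of what to expect):

java -version     # should report 1.8.x
mvn -version      # should report Apache Maven 3.x
python --version  # should report Python 2.7.x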

  I uploaded the source package to a CentOS server (CentOS Linux release 7.9.2009 (Core)) and built it there:

tar xvfz apache-atlas-1.1.0-sources.tar.gz
cd apache-atlas-sources-1.1.0/
export MAVEN_OPTS="-Xms2g -Xmx2g"
# The build runs the unit tests, which takes quite a while; add -DskipTests to skip them. If a build attempt fails, you can rerun install directly without cleaning.
mvn clean -DskipTests install

Screenshot of the successful build:
(screenshot)

1. Problems encountered:

Problem 1:
(error screenshot)
Fix: mvn install:install-file -Dfile=je-7.4.5.jar -DgroupId=com.sleepycat -DartifactId=je -Dversion=7.4.5 -Dpackaging=jar

Problem 2:
(error screenshot)
Fix:

[root@node01 huiq]# echo $MAVEN_HOME
/opt/tools/apache-maven-3.5.4/
[root@node01 huiq]# vim /opt/tools/apache-maven-3.5.4/conf/settings.xml
    <!-- Aliyun mirror repositories -->
	  <mirror>
      <id>nexus-aliyun</id>
      <mirrorOf>*</mirrorOf>
      <name>Nexus aliyun</name>
      <url>http://maven.aliyun.com/nexus/content/groups/public</url>
    </mirror>
    
	  <mirror>
      <id>aliyunmaven</id>
      <mirrorOf>*</mirrorOf>
      <name>Aliyun public repository</name>
      <url>https://maven.aliyun.com/repository/public</url>
    </mirror>
    
    <mirror>
      <id>aliyunmaven</id>
      <mirrorOf>*</mirrorOf>
      <name>Aliyun google repository</name>
      <url>https://maven.aliyun.com/repository/google</url>
    </mirror>
    
    <mirror>
      <id>aliyunmaven</id>
      <mirrorOf>*</mirrorOf>
      <name>Aliyun apache-snapshots repository</name>
      <url>https://maven.aliyun.com/repository/apache-snapshots</url>
    </mirror>
    
    <mirror>
      <id>aliyunmaven</id>
      <mirrorOf>*</mirrorOf>
      <name>Aliyun spring repository</name>
      <url>https://maven.aliyun.com/repository/spring</url>
    </mirror>
    
    <mirror>
      <id>aliyunmaven</id>
      <mirrorOf>*</mirrorOf>
      <name>Aliyun spring-plugin repository</name>
      <url>https://maven.aliyun.com/repository/spring-plugin</url>
    </mirror>

Problem 3:
(error screenshot)
Fix: mvn install:install-file -Dfile=pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar -DgroupId=org.pentaho -DartifactId=pentaho-aggdesigner-algorithm -Dversion=5.1.5-jhyde -Dpackaging=jar

Problem 4:
(error screenshot)
Fix: mvn install:install-file -Dfile=sqoop-1.4.6.2.3.99.0-195.jar -DgroupId=org.apache.sqoop -DartifactId=sqoop -Dversion=1.4.6.2.3.99.0-195 -Dpackaging=jar

# The distribution can be packaged in either of two ways (1. external HBase/Solr, 2. embedded HBase/Solr)
mvn clean package -DskipTests -Pdist,external-hbase-solr
mvn clean package -DskipTests -Pdist,embedded-hbase-solr   # I chose the simpler embedded option

Note 1: the steps above were done on Linux. I later repeated the build on a local Windows 7 machine, ran into the same four problems, and fixed them the same way. However, running mvn clean -DskipTests install or mvn clean -DskipTests package -Pdist,embedded-hbase-solr produced a different error: Some Enforcer rules have failed. Look above for specific messages explaining why the rule failed.
(error screenshot)
Fix: rerun the commands with the enforcer plugin disabled, i.e. mvn clean install -DskipTests -Denforcer.skip=true or mvn clean -DskipTests package -Pdist,embedded-hbase-solr -Denforcer.skip=true

Note 2: later I also built apache-atlas-2.1.0-sources in IDEA on the same Windows 7 machine; with the fixes above that build succeeded as well.

Reference: 编译项目报错:Some Enforcer rules have failed. Look above for specific messages explaining why the rule failed.

2. Starting Atlas:
$ cd apache-atlas-sources-1.1.0/distro/target/apache-atlas-1.1.0-server/apache-atlas-1.1.0/bin
[root@node01 bin]# python atlas_start.py
configured for local hbase.
hbase started.
configured for local solr.
solr started.
setting up solr collections...
starting atlas on host localhost
starting atlas on port 21000
..............................
Apache Atlas Server started!!!
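
  Once the server reports it has started, the REST API can be used as a quick health check. This is a sketch, assuming the stock setup where Atlas listens on port 21000 with the default admin/admin credentials:

curl -u admin:admin http://localhost:21000/api/atlas/admin/version
# expected: a small JSON response containing the Atlas version, e.g. {"Version":"1.1.0", ...}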

II. Setting up an ubuntu-18.04 development environment

  I used VMware Workstation 15 Pro (15.5.0 build-14665864) to install ubuntu-18.04 as the test environment; once it is installed, Ctrl+Alt+T opens a terminal. One thing to note: my first attempt with VMware-workstation-full-12.0.0-2985596.exe failed to install ubuntu-18.04; according to what I found online the VMware version was too old, and reinstalling with version 15 worked. (References: "not syncing : corrupted stack end detected inside scheduler解决办法", "ubuntu 18.04版本虚拟机安装时出现错误not syncing : corrupted stack end detected inside scheduler的解决方式", "vmware安装ubuntu18.04总是 panic -not syncing:corrupted stack end detected inside schedule")

  A freshly installed ubuntu-18.04 may need a few tweaks before it is usable remotely; see these references: "在Ubantu18.04上开启ssh服务,实现远程连接", "ubuntu终端su认证失败:允许su到root的方法", "Ubuntu下root账户无法使用xshell远程连接解决方法总结".

  Since this is only a basic environment for Atlas and no distributed big-data cluster is needed, all of the following components are installed on a single VM:

  • hadoop-3.2.2.tar.gz
  • apache-hive-3.1.2-bin.tar.gz
  • hbase-2.0.6-bin.tar.gz
  • kafka_2.11-2.0.0.tgz
  • jdk-8u231-linux-x64.tar.gz
  • elasticsearch-7.14.1-linux-x86_64.tar.gz

  Upload the installation packages to their target directories on the server:
(screenshot)

1. Install JDK 1.8:
root@ubuntu:/opt/bigdata-packages/java# tar -zxf jdk-8u231-linux-x64.tar.gz

root@ubuntu:/opt/bigdata-packages/java# vim /etc/profile
# add the following configuration
export JAVA_HOME=/opt/bigdata-packages/java/jdk1.8.0_231
export PATH=$PATH:$JAVA_HOME/bin

root@ubuntu:/opt/bigdata-packages/java# source /etc/profile

root@ubuntu:/opt/bigdata-packages/java# java -version
java version "1.8.0_231"
Java(TM) SE Runtime Environment (build 1.8.0_231-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)
2. Install ZooKeeper:
root@ubuntu:/opt/bigdata-packages/zookeeper# tar -zxf zookeeper-3.4.6.tar.gz

root@ubuntu:/opt/bigdata-packages/zookeeper/zookeeper-3.4.6/conf# cp zoo_sample.cfg zoo.cfg

root@ubuntu:/opt/bigdata-packages/zookeeper/zookeeper-3.4.6/conf# mkdir -p /opt/bigdata-packages/zookeeper/zkdata

root@ubuntu:/opt/bigdata-packages/zookeeper/zookeeper-3.4.6/conf# vim /etc/profile
# add the following configuration
export ZK_HOME=/opt/bigdata-packages/zookeeper/zookeeper-3.4.6
export PATH=$PATH:$ZK_HOME/bin

root@ubuntu:/opt/bigdata-packages/zookeeper/zookeeper-3.4.6/conf# source /etc/profile

root@ubuntu:/opt/bigdata-packages/zookeeper/zookeeper-3.4.6/bin# zkServer.sh start
JMX enabled by default
Using config: /opt/bigdata-packages/zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

root@ubuntu:/opt/bigdata-packages/zookeeper/zookeeper-3.4.6/bin# jps
64480 Jps
64463 QuorumPeerMain
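
Before layering HBase and Kafka on top of ZooKeeper, a quick health check does not hurt (standard ZooKeeper commands, not part of the original notes; the second one assumes nc is installed):

zkServer.sh status            # should report Mode: standalone for this single-node setup
echo ruok | nc localhost 2181 # should answer "imok"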
3. Install Hadoop:
(1) Configure HDFS:
root@ubuntu:/opt/bigdata-packages/hadoop# tar -zxf hadoop-3.2.2.tar.gz

root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2# vim etc/hadoop/core-site.xml 
# edit etc/hadoop/core-site.xml and add the following (the file starts out with an empty <configuration> block!)
<configuration>
	<!-- NameNode (the HDFS master) RPC address -->
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://ubuntu:9000</value>
	</property>
	<!-- where Hadoop stores the files it generates at runtime -->
	<property>
		<name>hadoop.tmp.dir</name>
		<value>/usr/hadoop/tmp</value>
	</property>
</configuration>

root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2# vim etc/hadoop/hdfs-site.xml
# edit etc/hadoop/hdfs-site.xml and add the following
<configuration>
	<property>
		<name>dfs.name.dir</name>
		<value>/usr/hadoop/hdfs/name</value>
		<description>where the NameNode stores the HDFS namespace metadata</description>
	</property>
	<property>
		<name>dfs.data.dir</name>
		<value>/usr/hadoop/hdfs/data</value>
		<description>physical location of data blocks on the DataNode</description>
	</property>
	<!-- HDFS replication factor -->
	<property>
		<name>dfs.replication</name>
		<value>1</value>
	</property>
</configuration>

root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2# vim etc/hadoop/hadoop-env.sh
# set JAVA_HOME in hadoop-env.sh to the real JDK path, otherwise startup fails with: localhost: Error: JAVA_HOME is not set and could not be found.
export JAVA_HOME=/opt/bigdata-packages/java/jdk1.8.0_231

# starting and stopping HDFS; the NameNode only needs to be formatted before the very first start
root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2# vim sbin/start-dfs.sh
root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2# vim sbin/stop-dfs.sh
# add the following to both start-dfs.sh and stop-dfs.sh:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2# ./bin/hdfs namenode -format

# configure the Hadoop environment variables
root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2# vim /etc/profile
# add the following configuration
export HADOOP_HOME=/opt/bigdata-packages/hadoop/hadoop-3.2.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2# source /etc/profile

root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2# ./sbin/start-dfs.sh 

Note: the user variables must be added at the top of the two scripts, not at the end, otherwise the following error appears:
(error screenshot)
Problem encountered: ubuntu: root@ubuntu: Permission denied (publickey,password).
(error screenshot)
Fix: (shown in a screenshot; a typical fix is sketched below)
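The original screenshot is not reproduced here; a typical way to resolve root@ubuntu: Permission denied during the SSH step of start-dfs.sh is to set up passwordless SSH for root to the local host (a sketch, assuming root login over SSH is permitted in /etc/ssh/sshd_config):

ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa   # generate a key pair if one does not already exist
ssh-copy-id root@ubuntu                    # or: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh root@ubuntu true                       # confirm that passwordless login now works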
  HDFS web UI: http://192.168.223.128:9870/
(screenshot)

(2) Configure YARN:
root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2/etc/hadoop# vim mapred-site.xml
# add the following configuration
<configuration>
	<!-- tell the MapReduce framework to run on YARN -->
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
</configuration>

root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2/etc/hadoop# vim yarn-site.xml 
# add the following configuration
<configuration>
	<!-- reducers fetch data via mapreduce_shuffle -->
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
</configuration>

root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2/etc/hadoop# vim ../../sbin/start-yarn.sh 
root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2/etc/hadoop# vim ../../sbin/stop-yarn.sh 
# add the following to both start-yarn.sh and stop-yarn.sh
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2# ./sbin/start-yarn.sh 
Starting resourcemanager
Starting nodemanagers

root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2# jps
67537 NodeManager
67684 Jps
67383 ResourceManager
66555 DataNode
66748 SecondaryNameNode
66399 NameNode
64463 QuorumPeerMain

  YARN web UI: http://192.168.223.128:8088/cluster
(screenshot)

4. Install HBase:
root@ubuntu:/opt/bigdata-packages/hbase# tar -zxf hbase-2.0.6-bin.tar.gz

root@ubuntu:/opt/bigdata-packages/hbase# vim /etc/profile
# add the following configuration
export HBASE_HOME=/opt/bigdata-packages/hbase/hbase-2.0.6
export PATH=$PATH:$HBASE_HOME/bin
root@ubuntu:/opt/bigdata-packages/hbase# source /etc/profile

root@ubuntu:/opt/bigdata-packages/hbase# vim hbase-2.0.6/conf/hbase-env.sh
# add the following configuration
export JAVA_HOME=/opt/bigdata-packages/java/jdk1.8.0_231
export HBASE_HOME=/opt/bigdata-packages/hbase/hbase-2.0.6
export HBASE_CLASSPATH=/opt/bigdata-packages/hbase/hbase-2.0.6/lib/*
# HBASE_MANAGES_ZK=false disables the ZooKeeper bundled with HBase; the standalone ZooKeeper installed earlier is used instead.
export HBASE_MANAGES_ZK=false

root@ubuntu:/opt/bigdata-packages/hbase# vim hbase-2.0.6/conf/hbase-site.xml
<configuration>
        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://ubuntu:9000/hbase</value>
                <description>The directory shared by region servers.</description>
        </property>
        <!-- false = standalone mode, true = distributed mode -->
        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>
        <property>
                <name>hbase.tmp.dir</name>
                <value>/opt/bigdata-packages/hbase/hbase-data/tmp</value>
        </property>
        <property>
                <name>hbase.zookeeper.quorum</name>
                <value>ubuntu:2181</value>
        </property>
</configuration>

root@ubuntu:/opt/bigdata-packages/hbase# ./hbase-2.0.6/bin/start-hbase.sh 
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/bigdata-packages/hbase/hbase-2.0.6/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/bigdata-packages/hadoop/hadoop-3.2.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
running master, logging to /opt/bigdata-packages/hbase/hbase-2.0.6/logs/hbase-root-master-ubuntu.out
: running regionserver, logging to /opt/bigdata-packages/hbase/hbase-2.0.6/logs/hbase-root-regionserver-ubuntu.out

root@ubuntu:/opt/bigdata-packages/hbase# jps
67537 NodeManager
68675 Jps
67383 ResourceManager
66555 DataNode
68557 HRegionServer
66748 SecondaryNameNode
68415 HMaster
66399 NameNode
64463 QuorumPeerMain

  HBase web UI: http://192.168.223.128:16030/rs-status
(screenshot)
Problem encountered: the HMaster process disappears shortly after startup.

root@ubuntu:/opt/bigdata-packages/hbase/hbase-2.0.6# jps
67537 NodeManager
71461 HRegionServer
67383 ResourceManager
66555 DataNode
66748 SecondaryNameNode
72044 Jps
66399 NameNode
64463 QuorumPeerMain

root@ubuntu:~# tail -f /opt/bigdata-packages/hbase/hbase-2.0.6/logs/hbase-root-master-ubuntu.log
2021-10-14 01:47:07,284 ERROR [Thread-14] master.HMaster: Failed to become active master
java.lang.IllegalStateException: The procedure WAL relies on the ability to hsync for proper operation during component failures, but the underlying filesystem does not support doing so. Please check the config value of 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount that can provide it.
	at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:1083)
	at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:421)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:611)
	at org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1411)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2227)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:569)
	at java.lang.Thread.run(Thread.java:748)
2021-10-14 01:47:07,287 ERROR [Thread-14] master.HMaster: ***** ABORTING master ubuntu,16000,1634201210338: Unhandled exception. Starting shutdown. *****
java.lang.IllegalStateException: The procedure WAL relies on the ability to hsync for proper operation during component failures, but the underlying filesystem does not support doing so. Please check the config value of 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount that can provide it.
	at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:1083)
	at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:421)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:611)
	at org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1411)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2227)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:569)
	at java.lang.Thread.run(Thread.java:748)

(screenshot)
Fix: add the following property to hbase-site.xml and restart HBase (a restart/verify sketch follows the reference below):

<!-- disables the stream capabilities (hflush/hsync) check -->
<property>
  <name>hbase.unsafe.stream.capability.enforce</name>
  <value>false</value>
</property>

Reference: Hbase2.1.0启动失败解决方案积累
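
After adding the property, restart HBase and confirm that HMaster stays alive (a minimal check, not from the original notes):

root@ubuntu:/opt/bigdata-packages/hbase# ./hbase-2.0.6/bin/stop-hbase.sh
root@ubuntu:/opt/bigdata-packages/hbase# ./hbase-2.0.6/bin/start-hbase.sh
root@ubuntu:/opt/bigdata-packages/hbase# jps | grep -E 'HMaster|HRegionServer'   # both should still be running after a minute or two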

5. Install Kafka:
root@ubuntu:/opt/bigdata-packages/kafka# tar -zxf kafka_2.11-2.0.0.tgz

root@ubuntu:/opt/bigdata-packages/kafka# vim kafka_2.11-2.0.0/config/server.properties 
# the original contents can be removed and replaced with the following
broker.id=0
port=9092
log.dirs=/opt/bigdata-packages/kafka/kafka-data/logs
zookeeper.connect=127.0.0.1:2181

root@ubuntu:/opt/bigdata-packages/kafka# vim kafka_2.11-2.0.0/config/zookeeper.properties 
# the original contents can be removed and replaced with the following (note: this embedded-ZooKeeper config is not actually used here, since Kafka points at the standalone ZooKeeper started earlier)
dataDir=/opt/software/kafka_2.12-2.7.0/zookeeper/dataDir
dataLogDir=/opt/software/kafka_2.12-2.7.0/zookeeper/dataLogDir
clientPort=2181
maxClientCnxns=100
admin.enableServer=false
tickTime=2000
initLimit=10

root@ubuntu:/opt/bigdata-packages/kafka# vim /etc/profile
# add the following configuration
export KAFKA_HOME=/opt/bigdata-packages/kafka/kafka_2.11-2.0.0
export PATH=$PATH:$KAFKA_HOME/bin

root@ubuntu:/opt/bigdata-packages/kafka# source /etc/profile

root@ubuntu:/opt/bigdata-packages/kafka# nohup kafka_2.11-2.0.0/bin/kafka-server-start.sh /opt/bigdata-packages/kafka/kafka_2.11-2.0.0/config/server.properties &
...... (output omitted)

root@ubuntu:/opt/bigdata-packages/kafka# jps
67537 NodeManager
90016 Kafka
89363 HMaster
67383 ResourceManager
91209 Jps
66555 DataNode
89514 HRegionServer
66748 SecondaryNameNode
66399 NameNode
64463 QuorumPeerMain

root@ubuntu:/opt/bigdata-packages/kafka/kafka_2.11-2.0.0/bin# kafka-topics.sh --list --zookeeper 192.168.223.128:2181
root@ubuntu:/opt/bigdata-packages/kafka/kafka_2.11-2.0.0/bin# kafka-topics.sh --zookeeper localhost:2181 --create --topic huiq --replication-factor 1 --partitions 1
Created topic "huiq".
root@ubuntu:/opt/bigdata-packages/kafka/kafka_2.11-2.0.0/bin# kafka-topics.sh --list --zookeeper 192.168.223.128:2181
huiq

Problem 1:
(error screenshots)
Fix: increase the VM's memory from 2 GB to 3 GB.
(screenshot)
Problem 2: messages can be produced, but the consumer never receives anything; the server log keeps printing the following error:

root@ubuntu:/opt/bigdata-packages/kafka# kafka-console-producer.sh --broker-list ubuntu:9092 --topic huiq
>afwef

root@ubuntu:/opt/bigdata-packages/kafka# kafka-console-consumer.sh --bootstrap-server ubuntu:9092 --topic huiq --from-beginning
...... (no response at all)

root@ubuntu:/opt/bigdata-packages/kafka/kafka_2.11-2.0.0/logs# tail -f server.log
[2021-10-18 01:09:43,121] ERROR [KafkaApi-0] Number of alive brokers '1' does not meet the required replication factor '3' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
[2021-10-18 01:09:43,134] ERROR [KafkaApi-0] Number of alive brokers '1' does not meet the required replication factor '3' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
[2021-10-18 01:09:43,135] ERROR [KafkaApi-0] Number of alive brokers '1' does not meet the required replication factor '3' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)

Fix:

root@ubuntu:/opt/bigdata-packages/kafka# vim kafka_2.11-2.0.0/config/server.properties
# add the following configuration
offsets.topic.replication.factor=1
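
server.properties is only read at startup, so the broker has to be restarted for the new replication factor to take effect; after that the console consumer should finally print the produced message (a sketch, reusing the paths from above):

kafka-server-stop.sh
nohup kafka-server-start.sh /opt/bigdata-packages/kafka/kafka_2.11-2.0.0/config/server.properties &
kafka-console-consumer.sh --bootstrap-server ubuntu:9092 --topic huiq --from-beginning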
6. Install Hive:
root@ubuntu:/opt/bigdata-packages/hive# tar -zxf apache-hive-3.1.2-bin.tar.gz

# install MySQL 8.0.26
root@ubuntu:/opt/bigdata-packages/hive# wget https://repo.mysql.com//mysql-apt-config_0.8.12-1_all.deb
--2021-10-14 18:19:55--  https://repo.mysql.com//mysql-apt-config_0.8.12-1_all.deb
Resolving repo.mysql.com (repo.mysql.com)... 23.56.185.130
Connecting to repo.mysql.com (repo.mysql.com)|23.56.185.130|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 36306 (35K) [application/x-debian-package]
Saving to: ‘mysql-apt-config_0.8.12-1_all.deb’

mysql-apt-config_0.8.12-1_all.deb                100%[========================================================================================================>]  35.46K   204KB/s    in 0.2s    

2021-10-14 18:19:56 (204 KB/s) - ‘mysql-apt-config_0.8.12-1_all.deb’ saved [36306/36306]

root@ubuntu:/opt/bigdata-packages/hive# sudo dpkg -i mysql-apt-config_0.8.12-1_all.deb
Selecting previously unselected package mysql-apt-config.
(Reading database ... 167025 files and directories currently installed.)
Preparing to unpack mysql-apt-config_0.8.12-1_all.deb ...
Unpacking mysql-apt-config (0.8.12-1) ...
Setting up mysql-apt-config (0.8.12-1) ...
# a dialog like the one below pops up; choose OK to continue
Warning: apt-key should not be used in scripts (called from postinst maintainerscript of the package mysql-apt-config)
OK

(screenshot)

root@ubuntu:/opt/bigdata-packages/hive# sudo apt-get update
Get:1 http://repo.mysql.com/apt/ubuntu bionic InRelease [19.4 kB]
Hit:2 http://us.archive.ubuntu.com/ubuntu bionic InRelease                      
Get:5 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]                                       
Get:10 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]                                                                                     
Get:3 http://203.187.160.132:9011/repo.mysql.com/c3pr90ntc0td/apt/ubuntu bionic/mysql-8.0 Sources [967 B]                   
Get:12 http://us.archive.ubuntu.com/ubuntu bionic-updates/main i386 Packages [1,360 kB]                         
Get:6 http://203.187.160.131:9011/repo.mysql.com/c3pr90ntc0td/apt/ubuntu bionic/mysql-apt-config amd64 Packages [566 B]
Get:4 http://203.187.160.132:9011/repo.mysql.com/c3pr90ntc0td/apt/ubuntu bionic/mysql-apt-config i386 Packages [566 B]
Get:13 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]                                  
Get:7 http://203.187.160.132:9011/repo.mysql.com/c3pr90ntc0td/apt/ubuntu bionic/mysql-8.0 i386 Packages [8,327 B]  
Get:14 http://security.ubuntu.com/ubuntu bionic-security/main amd64 DEP-11 Metadata [50.4 kB]                           
Get:11 http://203.187.160.132:9011/repo.mysql.com/c3pr90ntc0td/apt/ubuntu bionic/mysql-tools i386 Packages [7,009 B]                                                                             
Get:8 http://203.187.160.131:9011/repo.mysql.com/c3pr90ntc0td/apt/ubuntu bionic/mysql-8.0 amd64 Packages [8,319 B]                                                                               
Get:15 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [2,251 kB]                                                                                                         
Get:9 http://203.187.160.131:9011/repo.mysql.com/c3pr90ntc0td/apt/ubuntu bionic/mysql-tools amd64 Packages [6,998 B]                                                                             
Get:16 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 DEP-11 Metadata [57.9 kB]                                                                                                
Get:17 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 DEP-11 Metadata [2,464 B]                                                                                              
Get:18 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 DEP-11 Metadata [292 kB]                                                                                                    
Get:19 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 DEP-11 Metadata [299 kB]                                                                                                
Get:20 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe DEP-11 48x48 Icons [226 kB]                                                                                                   
Get:21 http://us.archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 DEP-11 Metadata [2,468 B]                                                                                             
Get:22 http://us.archive.ubuntu.com/ubuntu bionic-backports/universe amd64 DEP-11 Metadata [9,268 B]                                                                                             
Fetched 4,855 kB in 21s (229 kB/s)                                                                                                                                                               
Reading package lists... Done

root@ubuntu:/opt/bigdata-packages/hive# sudo apt-get install mysql-server
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  libaio1 libmecab2 mecab-ipadic mecab-ipadic-utf8 mecab-utils mysql-client mysql-common mysql-community-client mysql-community-client-core mysql-community-client-plugins
  mysql-community-server mysql-community-server-core
The following NEW packages will be installed:
  libaio1 libmecab2 mecab-ipadic mecab-ipadic-utf8 mecab-utils mysql-client mysql-common mysql-community-client mysql-community-client-core mysql-community-client-plugins
  mysql-community-server mysql-community-server-core mysql-server
0 upgraded, 13 newly installed, 0 to remove and 1 not upgraded.
Need to get 38.5 MB of archives.
After this operation, 280 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-common amd64 8.0.26-1ubuntu18.04 [68.7 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libaio1 amd64 0.3.110-5ubuntu0.1 [6,476 B]
Get:3 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 libmecab2 amd64 0.996-5 [257 kB]
Get:4 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-community-client-plugins amd64 8.0.26-1ubuntu18.04 [1,105 kB]
Get:5 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-community-client-core amd64 8.0.26-1ubuntu18.04 [1,690 kB]
Get:6 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-community-client amd64 8.0.26-1ubuntu18.04 [2,799 kB]
Get:7 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-client amd64 8.0.26-1ubuntu18.04 [65.0 kB]
Get:8 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-community-server-core amd64 8.0.26-1ubuntu18.04 [20.3 MB]
Get:9 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 mecab-utils amd64 0.996-5 [4,856 B]                                                                                              
Get:10 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 mecab-ipadic all 2.7.0-20070801+main-1 [12.1 MB]                                                                                
Get:11 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-community-server amd64 8.0.26-1ubuntu18.04 [76.3 kB]                                                                        
Get:12 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-server amd64 8.0.26-1ubuntu18.04 [65.0 kB]                                                                                  
83% [10 mecab-ipadic 5,287 kB/12.1 MB 44%]                                                                                                                                      53.6 kB/s 2min 7s[2021-10-14 18:29:14,451] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
Get:13 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 mecab-ipadic-utf8 all 2.7.0-20070801+main-1 [3,522 B]                                                                           
Fetched 38.5 MB in 5min 57s (108 kB/s)                                                                                                                                                           
Preconfiguring packages ...

(screenshot)

Selecting previously unselected package mysql-common.
(Reading database ... 167030 files and directories currently installed.)
Preparing to unpack .../00-mysql-common_8.0.26-1ubuntu18.04_amd64.deb ...
Unpacking mysql-common (8.0.26-1ubuntu18.04) ...
Selecting previously unselected package mysql-community-client-plugins.
Preparing to unpack .../01-mysql-community-client-plugins_8.0.26-1ubuntu18.04_amd64.deb ...
Unpacking mysql-community-client-plugins (8.0.26-1ubuntu18.04) ...
Selecting previously unselected package mysql-community-client-core.
Preparing to unpack .../02-mysql-community-client-core_8.0.26-1ubuntu18.04_amd64.deb ...
Unpacking mysql-community-client-core (8.0.26-1ubuntu18.04) ...
Selecting previously unselected package mysql-community-client.
Preparing to unpack .../03-mysql-community-client_8.0.26-1ubuntu18.04_amd64.deb ...
Unpacking mysql-community-client (8.0.26-1ubuntu18.04) ...
Selecting previously unselected package mysql-client.
Preparing to unpack .../04-mysql-client_8.0.26-1ubuntu18.04_amd64.deb ...
Unpacking mysql-client (8.0.26-1ubuntu18.04) ...
Selecting previously unselected package libaio1:amd64.
Preparing to unpack .../05-libaio1_0.3.110-5ubuntu0.1_amd64.deb ...
Unpacking libaio1:amd64 (0.3.110-5ubuntu0.1) ...
Selecting previously unselected package libmecab2:amd64.
Preparing to unpack .../06-libmecab2_0.996-5_amd64.deb ...
Unpacking libmecab2:amd64 (0.996-5) ...
Selecting previously unselected package mysql-community-server-core.
Preparing to unpack .../07-mysql-community-server-core_8.0.26-1ubuntu18.04_amd64.deb ...
Unpacking mysql-community-server-core (8.0.26-1ubuntu18.04) ...
Selecting previously unselected package mysql-community-server.
Preparing to unpack .../08-mysql-community-server_8.0.26-1ubuntu18.04_amd64.deb ...
Unpacking mysql-community-server (8.0.26-1ubuntu18.04) ...
Selecting previously unselected package mecab-utils.
Preparing to unpack .../09-mecab-utils_0.996-5_amd64.deb ...
Unpacking mecab-utils (0.996-5) ...
Selecting previously unselected package mecab-ipadic.
Preparing to unpack .../10-mecab-ipadic_2.7.0-20070801+main-1_all.deb ...
Unpacking mecab-ipadic (2.7.0-20070801+main-1) ...
Selecting previously unselected package mecab-ipadic-utf8.
Preparing to unpack .../11-mecab-ipadic-utf8_2.7.0-20070801+main-1_all.deb ...
Unpacking mecab-ipadic-utf8 (2.7.0-20070801+main-1) ...
Selecting previously unselected package mysql-server.
Preparing to unpack .../12-mysql-server_8.0.26-1ubuntu18.04_amd64.deb ...
Unpacking mysql-server (8.0.26-1ubuntu18.04) ...
Setting up mysql-common (8.0.26-1ubuntu18.04) ...
update-alternatives: using /etc/mysql/my.cnf.fallback to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Setting up libmecab2:amd64 (0.996-5) ...
Setting up mysql-community-client-plugins (8.0.26-1ubuntu18.04) ...
Setting up libaio1:amd64 (0.3.110-5ubuntu0.1) ...
Setting up mecab-utils (0.996-5) ...
Setting up mecab-ipadic (2.7.0-20070801+main-1) ...
Compiling IPA dictionary for Mecab.  This takes long time...
reading /usr/share/mecab/dic/ipadic/unk.def ... 40
emitting double-array: 100% |###########################################| 
/usr/share/mecab/dic/ipadic/model.def is not found. skipped.
reading /usr/share/mecab/dic/ipadic/Postp.csv ... 146
reading /usr/share/mecab/dic/ipadic/Verb.csv ... 130750
reading /usr/share/mecab/dic/ipadic/Noun.place.csv ... 72999
reading /usr/share/mecab/dic/ipadic/Noun.nai.csv ... 42
reading /usr/share/mecab/dic/ipadic/Symbol.csv ... 208
reading /usr/share/mecab/dic/ipadic/Noun.demonst.csv ... 120
reading /usr/share/mecab/dic/ipadic/Others.csv ... 2
reading /usr/share/mecab/dic/ipadic/Prefix.csv ... 221
reading /usr/share/mecab/dic/ipadic/Noun.adjv.csv ... 3328
reading /usr/share/mecab/dic/ipadic/Noun.verbal.csv ... 12146
reading /usr/share/mecab/dic/ipadic/Adnominal.csv ... 135
reading /usr/share/mecab/dic/ipadic/Noun.name.csv ... 34202
reading /usr/share/mecab/dic/ipadic/Adverb.csv ... 3032
reading /usr/share/mecab/dic/ipadic/Suffix.csv ... 1393
reading /usr/share/mecab/dic/ipadic/Noun.proper.csv ... 27327
reading /usr/share/mecab/dic/ipadic/Noun.csv ... 60477
reading /usr/share/mecab/dic/ipadic/Auxil.csv ... 199
reading /usr/share/mecab/dic/ipadic/Filler.csv ... 19
reading /usr/share/mecab/dic/ipadic/Postp-col.csv ... 91
reading /usr/share/mecab/dic/ipadic/Conjunction.csv ... 171
reading /usr/share/mecab/dic/ipadic/Noun.org.csv ... 16668
reading /usr/share/mecab/dic/ipadic/Interjection.csv ... 252
reading /usr/share/mecab/dic/ipadic/Noun.adverbal.csv ... 795
reading /usr/share/mecab/dic/ipadic/Noun.others.csv ... 151
reading /usr/share/mecab/dic/ipadic/Noun.number.csv ... 42
reading /usr/share/mecab/dic/ipadic/Adj.csv ... 27210
emitting double-array: 100% |###########################################| 
reading /usr/share/mecab/dic/ipadic/matrix.def ... 1316x1316
emitting matrix      : 100% |###########################################| 

done!
update-alternatives: using /var/lib/mecab/dic/ipadic to provide /var/lib/mecab/dic/debian (mecab-dictionary) in auto mode
Setting up mysql-community-client-core (8.0.26-1ubuntu18.04) ...
Setting up mysql-community-server-core (8.0.26-1ubuntu18.04) ...
Setting up mecab-ipadic-utf8 (2.7.0-20070801+main-1) ...
Compiling IPA dictionary for Mecab.  This takes long time...
reading /usr/share/mecab/dic/ipadic/unk.def ... 40
emitting double-array: 100% |###########################################| 
/usr/share/mecab/dic/ipadic/model.def is not found. skipped.
reading /usr/share/mecab/dic/ipadic/Postp.csv ... 146
reading /usr/share/mecab/dic/ipadic/Verb.csv ... 130750
reading /usr/share/mecab/dic/ipadic/Noun.place.csv ... 72999
reading /usr/share/mecab/dic/ipadic/Noun.nai.csv ... 42
reading /usr/share/mecab/dic/ipadic/Symbol.csv ... 208
reading /usr/share/mecab/dic/ipadic/Noun.demonst.csv ... 120
reading /usr/share/mecab/dic/ipadic/Others.csv ... 2
reading /usr/share/mecab/dic/ipadic/Prefix.csv ... 221
reading /usr/share/mecab/dic/ipadic/Noun.adjv.csv ... 3328
reading /usr/share/mecab/dic/ipadic/Noun.verbal.csv ... 12146
reading /usr/share/mecab/dic/ipadic/Adnominal.csv ... 135
reading /usr/share/mecab/dic/ipadic/Noun.name.csv ... 34202
reading /usr/share/mecab/dic/ipadic/Adverb.csv ... 3032
reading /usr/share/mecab/dic/ipadic/Suffix.csv ... 1393
reading /usr/share/mecab/dic/ipadic/Noun.proper.csv ... 27327
reading /usr/share/mecab/dic/ipadic/Noun.csv ... 60477
reading /usr/share/mecab/dic/ipadic/Auxil.csv ... 199
reading /usr/share/mecab/dic/ipadic/Filler.csv ... 19
reading /usr/share/mecab/dic/ipadic/Postp-col.csv ... 91
reading /usr/share/mecab/dic/ipadic/Conjunction.csv ... 171
reading /usr/share/mecab/dic/ipadic/Noun.org.csv ... 16668
reading /usr/share/mecab/dic/ipadic/Interjection.csv ... 252
reading /usr/share/mecab/dic/ipadic/Noun.adverbal.csv ... 795
reading /usr/share/mecab/dic/ipadic/Noun.others.csv ... 151
reading /usr/share/mecab/dic/ipadic/Noun.number.csv ... 42
reading /usr/share/mecab/dic/ipadic/Adj.csv ... 27210
emitting double-array: 100% |###########################################| 
reading /usr/share/mecab/dic/ipadic/matrix.def ... 1316x1316
emitting matrix      : 100% |###########################################| 

done!
update-alternatives: using /var/lib/mecab/dic/ipadic-utf8 to provide /var/lib/mecab/dic/debian (mecab-dictionary) in auto mode
Setting up mysql-community-client (8.0.26-1ubuntu18.04) ...
Setting up mysql-client (8.0.26-1ubuntu18.04) ...
Setting up mysql-community-server (8.0.26-1ubuntu18.04) ...
update-alternatives: using /etc/mysql/mysql.cnf to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Created symlink /etc/systemd/system/multi-user.target.wants/mysql.service → /lib/systemd/system/mysql.service.
Setting up mysql-server (8.0.26-1ubuntu18.04) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1.4) ...

root@ubuntu:/opt/bigdata-packages/kafka# sudo netstat -anp | grep mysql
tcp6       0      0 :::33060                :::*                    LISTEN      92954/mysqld        
tcp6       0      0 :::3306                 :::*                    LISTEN      92954/mysqld        
unix  2      [ ACC ]     STREAM     LISTENING     700620   92954/mysqld         /var/run/mysqld/mysqlx.sock
unix  2      [ ACC ]     STREAM     LISTENING     700623   92954/mysqld         /var/run/mysqld/mysqld.sock
unix  2      [ ]         DGRAM                    700599   92954/mysqld         

root@ubuntu:/opt/bigdata-packages/kafka# mysql -h 127.0.0.1 -P 3306 -uroot -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.26 MySQL Community Server - GPL

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

  Installed this way, MySQL is already set to start on boot and its commands are already on the PATH, so nothing has to be configured by hand. The installation creates the following locations:

  • data directory: /var/lib/mysql/
  • configuration: /usr/share/mysql (commands and config files), /etc/mysql (e.g. my.cnf)
  • commands: /usr/bin (mysqladmin, mysqldump, etc.) and /usr/sbin
  • init script: /etc/init.d/mysql
# start
sudo service mysql start
# stop
sudo service mysql stop
# status
sudo service mysql status

  Configure remote access to MySQL:

# create the account
mysql> create user 'root'@'%' identified by '123456';
Query OK, 0 rows affected (0.15 sec)

# grant privileges; WITH GRANT OPTION means this user may pass its own privileges on to other users
mysql> grant all privileges on *.* to 'root'@'%' with grant option;
Query OK, 0 rows affected (0.00 sec)

# flush privileges reloads the user and privilege tables from the mysql system database into memory
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)

Problem encountered:

mysql> grant all ON *.* to root@'%' identified by '123456' with grant option;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'identified by '123456' with grant option' at line 1

Fix: the GRANT syntax changed in MySQL 8.0.26; GRANT ... IDENTIFIED BY is no longer accepted (that form worked in 5.7), so create the user first and then grant privileges, as done above.
Reference: mysql版本:'for the right syntax to use near ‘identified by ‘password’ with grant option’

# create the hive database and user accounts
mysql> CREATE DATABASE hive;
Query OK, 1 row affected (0.03 sec)

mysql> use hive;
Database changed
mysql> CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost';
Query OK, 0 rows affected (0.02 sec)

mysql> CREATE USER 'hive'@'master' IDENTIFIED BY 'hive';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'master';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)


root@ubuntu:/opt/bigdata-packages/hive/apache-hive-3.1.2-bin# vim conf/hive-site.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
        <property>
                <name>javax.jdo.option.ConnectionURL</name>
                <value>jdbc:mysql://ubuntu:3306/hive?createDatabaseIfNotExist=true</value>
        </property>
        
        <property>
                <name>javax.jdo.option.ConnectionUserName</name>
                <value>root</value>
        </property>
        
        <property>
                <name>javax.jdo.option.ConnectionPassword</name>
                <value>123456</value>
        </property>
        
        <property>
                <name>javax.jdo.option.ConnectionDriverName</name>
                <value>com.mysql.cj.jdbc.Driver</value>
        </property>
        
        <property>
                <name>datanucleus.schema.autoCreateAll</name>
                <value>true</value> 
        </property>
        
        <property>
                <name>hive.metastore.schema.verification</name>
                <value>false</value>
        </property>
</configuration>

root@ubuntu:/opt/bigdata-packages/hive/apache-hive-3.1.2-bin# vim conf/hive-env.sh
export HADOOP_HOME=/opt/bigdata-packages/hadoop/hadoop-3.2.2
export HIVE_CONF_DIR=/opt/bigdata-packages/hive/apache-hive-3.1.2-bin/conf

# copy the MySQL JDBC driver into Hive's lib directory
sftp:/opt/bigdata-packages/hive/apache-hive-3.1.2-bin/lib> put D:\Users\.m2\repository\mysql\mysql-connector-java\8.0.26\mysql-connector-java-8.0.26.jar
Uploading mysql-connector-java-8.0.26.jar to remote:/opt/bigdata-packages/hive/apache-hive-3.1.2-bin/lib/mysql-connector-java-8.0.26.jar
sftp: sent 2.34 MB in 0.09 seconds

root@ubuntu:/opt/bigdata-packages/hive/apache-hive-3.1.2-bin# vim /etc/profile
export HIVE_HOME=/opt/bigdata-packages/hive/apache-hive-3.1.2-bin
export PATH=$PATH:$HIVE_HOME/bin
root@ubuntu:/opt/bigdata-packages/hive/apache-hive-3.1.2-bin# source /etc/profile

# initialize the metastore schema
root@ubuntu:/opt/bigdata-packages/hive/apache-hive-3.1.2-bin# schematool -initSchema -dbType mysql
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/bigdata-packages/hive/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/bigdata-packages/hadoop/hadoop-3.2.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:	 jdbc:mysql://ubuntu:3306/hive?createDatabaseIfNotExist=true
Metastore Connection Driver :	 com.mysql.cj.jdbc.Driver
Metastore connection User:	 root
Starting metastore schema initialization to 3.1.0
Initialization script hive-schema-3.1.0.mysql.sql

Initialization script completed
schemaTool completed
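
To confirm the schema really landed in MySQL, the metastore tables can be listed directly (a quick check, not from the original notes; the table names are the standard Hive 3.1 metastore tables):

mysql -h 127.0.0.1 -uroot -p123456 -e "USE hive; SHOW TABLES;"
# should list tables such as DBS, TBLS, SDS, COLUMNS_V2, ...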


root@ubuntu:/opt/bigdata-packages/hive/apache-hive-3.1.2-bin# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/bigdata-packages/hbase/hbase-2.0.6/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/bigdata-packages/hadoop/hadoop-3.2.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/bigdata-packages/hive/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/bigdata-packages/hadoop/hadoop-3.2.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = 559a8ad5-d98c-4094-9541-35e64cec84eb

Logging initialized using configuration in jar:file:/opt/bigdata-packages/hive/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true
Hive Session ID = f35d3e21-5e89-4d69-be25-32a4a081bacb
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show databases;
OK
default
Time taken: 1.264 seconds, Fetched: 1 row(s)

Problem 1: Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
(error screenshot)
Cause: the guava.jar versions under Hive's lib and Hadoop's lib do not match.

Fix: use the newer version in both places.

root@ubuntu:/opt/bigdata-packages/hive# cp ../hadoop/hadoop-3.2.2/share/hadoop/common/lib/guava-27.0-jre.jar apache-hive-3.1.2-bin/lib/
root@ubuntu:/opt/bigdata-packages/hive# rm apache-hive-3.1.2-bin/lib/guava-19.0.jar

Problem 2: Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
(error screenshots)
Fix:

root@ubuntu:/opt/bigdata-packages/hive/apache-hive-3.1.2-bin# vim $HADOOP_HOME/etc/hadoop/mapred-site.xml
# add the following properties
        <property>
                <name>yarn.app.mapreduce.am.env</name>
                <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
        </property>
        <property>
                <name>mapreduce.map.env</name>
                <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
        </property>
        <property>
                <name>mapreduce.reduce.env</name>
                <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
        </property>

Problem 3: connecting to Hive with beeline fails with: Could not open connection to the HS2 server. Please check the server URI and if the URI is correct, then ask the administrator to check the server status.
(error screenshot)
Fix: see "Could not open connection to the HS2 server解决方案".

Another error followed: Error: Could not open client transport with JDBC Uri: jdbc:hive2://ubuntu:10000/default: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate root (state=08S01,code=0)
(error screenshot)
Fix:

root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2/etc/hadoop# vim core-site.xml
# add the following properties
        <property>
                <name>hadoop.proxyuser.root.hosts</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.root.groups</name>
                <value>*</value>
        </property>

root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2/sbin# ./stop-all.sh
root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2/sbin# ./start-all.sh
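
With the proxyuser settings in place and HDFS/YARN restarted, beeline should be able to connect. A sketch, assuming HiveServer2 still needs to be started by hand:

# start HiveServer2 in the background if it is not already running
nohup hive --service hiveserver2 &
# reconnect with beeline; -n root matches the hadoop.proxyuser.root.* settings above
beeline -u jdbc:hive2://ubuntu:10000/default -n root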

Problem 4: an INSERT waits a very long time and then returns FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.StatsTask, yet the row does show up in the table.
(error screenshot)
Fix (reference: hive insert return code 1 from org.apache.hadoop.hive.ql.exec.StatsTask): turn off automatic column statistics gathering and rerun the insert:

hive> set hive.stats.column.autogather;
hive.stats.column.autogather=true
hive> set hive.stats.column.autogather=false;
hive> INSERT INTO TABLE huiq_test VALUES(2,"huiq2");
Query ID = root_20211019233912_3a608916-c961-4140-b991-73485d326c0e
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1634711397285_0002, Tracking URL = http://ubuntu:8088/proxy/application_1634711397285_0002/
Kill Command = /opt/bigdata-packages/hadoop/hadoop-3.2.2/bin/mapred job  -kill job_1634711397285_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2021-10-19 23:39:24,158 Stage-1 map = 0%,  reduce = 0%
2021-10-19 23:39:31,908 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.86 sec
MapReduce Total cumulative CPU time: 1 seconds 860 msec
Ended Job = job_1634711397285_0002
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://ubuntu:9000/user/hive/warehouse/huiq.db/huiq_test/.hive-staging_hive_2021-10-19_23-39-12_193_4142870039936525660-1/-ext-10000
Loading data to table huiq.huiq_test
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 1.86 sec   HDFS Read: 5375 HDFS Write: 78 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 860 msec
OK
Time taken: 23.153 seconds
7. Install Elasticsearch:
root@ubuntu:/opt/bigdata-packages/elasticsearch# tar -zxf elasticsearch-7.14.1-linux-x86_64.tar.gz

  Since version 5.0, Elasticsearch can no longer be mixed with Logstash/Kibana 2.x, and because of tighter security it refuses to start as root, so a dedicated account has to be created just for Elasticsearch. The environment itself (ownership, kernel limits) still has to be prepared as root.

# create the es group
root@ubuntu:/opt/bigdata-packages/elasticsearch# addgroup es
Adding group `es' (GID 1001) ...
Done.

# create the es user
root@ubuntu:/opt/bigdata-packages/elasticsearch# adduser es
Adding user `es' ...
Adding new group `es' (1001) ...
Adding new user `es' (1001) with group `es' ...
Creating home directory `/home/es' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for es
Enter the new value, or press ENTER for the default
	Full Name []: 
	Room Number []: 
	Work Phone []: 
	Home Phone []: 
	Other []: 
Is the information correct? [Y/n] y

# grant the user admin rights by adding the following line to /etc/sudoers
root@ubuntu:/opt/bigdata-packages/elasticsearch# vim /etc/sudoers
es ALL=(ALL) ALL

# give the es user ownership of (and execute rights on) the elasticsearch directory
root@ubuntu:/opt/bigdata-packages/elasticsearch# chown -R es:es /opt/bigdata-packages/elasticsearch/elasticsearch-7.14.1

root@ubuntu:/opt/bigdata-packages/elasticsearch# vim /etc/sysctl.conf
vm.max_map_count=655360
fs.file-max=65536

root@ubuntu:/opt/bigdata-packages/elasticsearch# source /etc/profile
root@ubuntu:/opt/bigdata-packages/elasticsearch# sudo sysctl -p
vm.max_map_count = 655360
fs.file-max = 65536

root@ubuntu:/opt/bigdata-packages/elasticsearch# vim /etc/profile
export ES_HOME=/opt/bigdata-packages/elasticsearch/elasticsearch-7.14.1
export PATH=$PATH:$ES_HOME/bin
root@ubuntu:/opt/bigdata-packages/elasticsearch# source /etc/profile

# the es user will write to the elasticsearch directory when starting it, so it needs ownership
root@ubuntu:/opt/bigdata-packages# chown -R es.es elasticsearch/

# switch to the es user, edit the config file, and get ready to start Elasticsearch
root@ubuntu:/opt/bigdata-packages/elasticsearch# su - es

es@ubuntu:/opt/bigdata-packages/elasticsearch$ vim elasticsearch-7.14.1/config/elasticsearch.yml
# add the following configuration
cluster.name: my-application
node.name: node-1
path.data: /opt/bigdata-packages/elasticsearch/elastic/data
path.logs: /opt/bigdata-packages/elasticsearch/elastic/logs
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["node-1"]

es@ubuntu:/opt/bigdata-packages/elasticsearch$ elasticsearch-7.14.1/bin/elasticsearch
...... (output omitted)

root@ubuntu:/opt/bigdata-packages/Elasticsearch# free -h
              total        used        free      shared  buff/cache   available
Mem:           2.9G        1.2G        1.3G        1.4M        402M        1.7G
Swap:          947M        704M        243M

# Elasticsearch needs a lot of memory at startup; it only came up after stopping Kafka, HBase and Hadoop to free 1.3 GB of RAM (with only 1.0 GB free it would not start)
root@ubuntu:/opt/bigdata-packages/Elasticsearch# jps
100609 Jps
100423 Elasticsearch
64463 QuorumPeerMain

  Open port 9200 on the host; if a response like the one below appears, the installation succeeded: http://192.168.223.128:9200/
(screenshot)
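
The same check can be done from the shell; a healthy node answers with a JSON document describing the node and cluster configured above:

curl http://192.168.223.128:9200
# expected: {"name" : "node-1", "cluster_name" : "my-application", ..., "version" : {"number" : "7.14.1", ...}}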

8. Install Solr:
root@ubuntu:/opt/bigdata-packages/solr# tar -zxf solr-8.9.0.tgz

root@ubuntu:/opt/bigdata-packages/solr# vim solr-8.9.0/bin/solr.in.sh
ZK_HOST="ubuntu:2181"
SOLR_HOST="ubuntu"
SOLR_HOME=/opt/bigdata-packages/solr/solr-8.9.0/server/solr
SOLR_RECOMMENDED_OPEN_FILES=65000
SOLR_RECOMMENDED_MAX_PROCESSES=65000
SOLR_PORT=8983

root@ubuntu:/opt/bigdata-packages/solr# vim /etc/profile
export SOLR_HOME=/opt/bigdata-packages/solr/solr-8.9.0
export PATH=$PATH:$SOLR_HOME/bin
root@ubuntu:/opt/bigdata-packages/solr# source /etc/profile

root@ubuntu:/opt/bigdata-packages/solr/solr-8.9.0# bin/solr start -force
*** [WARN] *** Your open file limit is currently 1024.  
 It should be set to 65000 to avoid operational disruption. 
 If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
*** [WARN] ***  Your Max Processes Limit is currently 7650. 
 It should be set to 65000 to avoid operational disruption. 
 If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
Waiting up to 180 seconds to see Solr running on port 8983 [/]  
Started Solr server on port 8983 (pid=103729). Happy searching!

  Solr web UI: http://192.168.223.128:8983/
(screenshot)
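
If this Solr instance is going to serve as the index backend for Atlas, the three Atlas collections usually have to be created before Atlas starts. This is only a sketch based on the standard Atlas setup steps; the conf/solr directory under the Atlas installation referenced later in these notes is an assumption:

root@ubuntu:/opt/bigdata-packages/solr/solr-8.9.0# bin/solr create -c vertex_index   -d /opt/bigdata-packages/atlas/apache-atlas-2.1.0/conf/solr -shards 1 -replicationFactor 1 -force
root@ubuntu:/opt/bigdata-packages/solr/solr-8.9.0# bin/solr create -c edge_index     -d /opt/bigdata-packages/atlas/apache-atlas-2.1.0/conf/solr -shards 1 -replicationFactor 1 -force
root@ubuntu:/opt/bigdata-packages/solr/solr-8.9.0# bin/solr create -c fulltext_index -d /opt/bigdata-packages/atlas/apache-atlas-2.1.0/conf/solr -shards 1 -replicationFactor 1 -force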

9. Install Maven:
root@ubuntu:/opt/bigdata-packages/maven# wget http://mirrors.cnnic.cn/apache/maven/maven-3/3.5.4/binaries/apache-maven-3.5.4-bin.tar.gz
--2021-10-15 00:25:59--  http://mirrors.cnnic.cn/apache/maven/maven-3/3.5.4/binaries/apache-maven-3.5.4-bin.tar.gz
Resolving mirrors.cnnic.cn (mirrors.cnnic.cn)... 101.6.15.130, 2402:f000:1:400::2
Connecting to mirrors.cnnic.cn (mirrors.cnnic.cn)|101.6.15.130|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8842660 (8.4M) [application/octet-stream]
Saving to: ‘apache-maven-3.5.4-bin.tar.gz’

apache-maven-3.5.4-bin.tar.gz                    100%[========================================================================================================>]   8.43M  9.94MB/s    in 0.8s    

2021-10-15 00:26:00 (9.94 MB/s) - ‘apache-maven-3.5.4-bin.tar.gz’ saved [8842660/8842660]

root@ubuntu:/opt/bigdata-packages/maven# tar -zxvf apache-maven-3.5.4-bin.tar.gz

root@ubuntu:/opt/bigdata-packages/maven# vim /etc/profile
# add the following configuration
export MAVEN_HOME=/opt/bigdata-packages/maven/apache-maven-3.5.4
export PATH=$MAVEN_HOME/bin:$PATH
root@ubuntu:/opt/bigdata-packages/maven# source /etc/profile

root@ubuntu:/opt/bigdata-packages/maven# mvn -version
Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T11:33:14-07:00)
Maven home: /opt/bigdata-packages/maven/apache-maven-3.5.4
Java version: 1.8.0_231, vendor: Oracle Corporation, runtime: /opt/bigdata-packages/java/jdk1.8.0_231/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.15.0-20-generic", arch: "amd64", family: "unix"
10. Install Python 2.7.14:
root@ubuntu:/opt/bigdata-packages/python# apt-get update
root@ubuntu:/opt/bigdata-packages/python# apt-get install build-essential checkinstall
root@ubuntu:/opt/bigdata-packages/python# apt-get install libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev

root@ubuntu:/opt/bigdata-packages/python# wget https://www.python.org/ftp/python/2.7.14/Python-2.7.14.tgz
--2021-10-17 20:26:07--  https://www.python.org/ftp/python/2.7.14/Python-2.7.14.tgz
Resolving www.python.org (www.python.org)... 151.101.108.223, 2a04:4e42:1a::223
Connecting to www.python.org (www.python.org)|151.101.108.223|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 17176758 (16M) [application/octet-stream]
Saving to: ‘Python-2.7.14.tgz’

Python-2.7.14.tgz                                100%[========================================================================================================>]  16.38M  13.9KB/s    in 25m 38s 

2021-10-17 20:51:48 (10.9 KB/s) - ‘Python-2.7.14.tgz’ saved [17176758/17176758]

root@ubuntu:/opt/bigdata-packages/python# tar xzf Python-2.7.14.tgz
root@ubuntu:/opt/bigdata-packages/python# cd Python-2.7.14
root@ubuntu:/opt/bigdata-packages/python/Python-2.7.14# ./configure
root@ubuntu:/opt/bigdata-packages/python/Python-2.7.14# make altinstall

root@ubuntu:/opt/bigdata-packages/python# python2.7 -V
Python 2.7.14
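
Atlas' control scripts (atlas_start.py and friends) expect a python command on the PATH, while make altinstall deliberately installs only the python2.7 binary. If nothing else on the machine depends on python pointing elsewhere, a symlink is one way to handle this (a judgment call, not from the original notes):

ln -s /usr/local/bin/python2.7 /usr/local/bin/python   # only if "python" does not already resolve to Python 2
python -V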
11. Install Tez:

  A plain CREATE TABLE statement took a staggering eight hours to finish, and I still do not know why:

hive> CREATE TABLE IF NOT EXISTS huiq_test (id BIGINT, name STRING);
OK
Time taken: 28806.741 seconds

# when the job is interrupted, hive.log reports: java.lang.ClassNotFoundException: org.apache.tez.dag.api.DAG
hive> CREATE TABLE IF NOT EXISTS huiq_test1 (id BIGINT, name STRING);
Interrupting... Be patient, this might take some time.
Press Ctrl+C again to kill JVM
Exiting the JVM
root@ubuntu:/opt/bigdata-packages/hbase/hbase-2.0.6/bin# 
2021-10-20T04:53:55,721 ERROR [SIGINT handler] tez.TezJobExecHelper: Error getting tez method
java.lang.ClassNotFoundException: org.apache.tez.dag.api.DAG
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[?:1.8.0_231]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_231]
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) ~[?:1.8.0_231]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_231]
	at java.lang.Class.forName0(Native Method) ~[?:1.8.0_231]
	at java.lang.Class.forName(Class.java:264) ~[?:1.8.0_231]
	at org.apache.hadoop.hive.ql.exec.tez.TezJobExecHelper.<clinit>(TezJobExecHelper.java:40) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.cli.CliDriver$3.handle(CliDriver.java:377) ~[hive-cli-3.1.2.jar:3.1.2]
	at sun.misc.Signal$1.run(Signal.java:212) ~[?:1.8.0_231]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_231]
2021-10-20T04:53:55,730  WARN [SIGINT handler] tez.TezJobExecHelper: Unable to find tez method for killing jobs

  From what I gathered, Hive 2+ already treats Hive on MR as deprecated and it may be removed in a future release; Spark or Tez combined with Hive is now the more common setup. Most companies in production still run Hive 1.x, some are gradually moving towards 2.3, and 3.1.2 really is quite new (this point comes from 基于Hadoop3.1.2集群的Hive3.1.2安装(有不少坑)).
(screenshot)
  As the official site shows, using the Tez engine with Hadoop 3.x requires building Tez yourself (Tez 0.8.3 and later requires Apache Hadoop 2.6.0 or later; Tez 0.9.0 and later requires Apache Hadoop 2.7.0 or later), so I set out to install Tez.

Reference: Hive3.1.2+大数据引擎Tez0.9.2安装部署到使用测试(踩坑详情)

Source package download: apache-tez-0.9.2-src.tar.gz

(1) Build:
[root@node01 huiq]# tar -zxvf apache-tez-0.9.2-src.tar.gz
[root@node01 apache-tez-0.9.2-src]# vim pom.xml
# make the following changes
# the Hadoop version you have installed
<hadoop.version>3.2.2</hadoop.version>

# keep the guava version in line with the one used earlier
       <dependency>
         <groupId>com.google.guava</groupId>
         <artifactId>guava</artifactId>
         <version>19.0</version>
       </dependency>

# the tez-ui module is slow and painful to build and not needed here, so skip it
<!--<module>tez-ui</module>-->
[root@node01 apache-tez-0.9.2-src]# mvn clean package -DskipTests=true -Dmaven.javadoc.skip=true

(screenshots)
  A hadoop-lzo jar is also needed later, and building it requires compiling LZO locally first. Reference: Apache Hadoop3.1.3编译安装部署lzo压缩指南(照做就可以,别落一步)

# download, build and install LZO
[root@node01 huiq]# wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.10.tar.gz
[root@node01 huiq]# tar -zxvf lzo-2.10.tar.gz
[root@node01 huiq]# cd lzo-2.10
[root@node01 lzo-2.10]# ./configure -prefix=/usr/local/hadoop/lzo/
[root@node01 lzo-2.10]# make
[root@node01 lzo-2.10]# make install

# download the hadoop-lzo source (access to github may occasionally fail)
[root@node01 huiq]# wget https://github.com/twitter/hadoop-lzo/archive/master.zip
[root@node01 huiq]# unzip master.zip
[root@node01 huiq]# cd hadoop-lzo-master/

# edit pom.xml
[root@node01 hadoop-lzo-master]# vim pom.xml
<hadoop.current.version>3.2.2</hadoop.current.version>

# export two temporary environment variables
[root@node01 hadoop-lzo-master]# export C_INCLUDE_PATH=/usr/local/hadoop/lzo/include
[root@node01 hadoop-lzo-master]# export LIBRARY_PATH=/usr/local/hadoop/lzo/lib

# run the maven build
[root@node01 hadoop-lzo-master]# mvn package -Dmaven.test.skip=true

(error screenshot)
Fix: [root@node01 hadoop-lzo-master]# yum -y install gcc-c++ lzo-devel zlib-devel autoconf automake libtool
(screenshot)
Note: install the build dependencies up front; git may also be needed (it was already present on this machine).
(screenshot)
  In the target directory, hadoop-lzo-0.4.21-SNAPSHOT.jar is the freshly built hadoop-lzo component. Copy it into hadoop-3.2.2/share/hadoop/common/, then edit core-site.xml to enable LZO compression and restart Hadoop:

root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2/etc/hadoop# vim core-site.xml
        <property>
                <name>io.compression.codecs</name>
                <value>
                        org.apache.hadoop.io.compress.GzipCodec,
                        org.apache.hadoop.io.compress.DefaultCodec,
                        org.apache.hadoop.io.compress.BZip2Codec,
                        org.apache.hadoop.io.compress.SnappyCodec,
                        com.hadoop.compression.lzo.LzoCodec,
                        com.hadoop.compression.lzo.LzopCodec
                </value>
        </property>

        <property>
                <name>io.compression.codec.lzo.class</name>
                <value>com.hadoop.compression.lzo.LzoCodec</value>
        </property>
(2) Configure Tez for Hive:
# note: the minimal tarball is the one extracted locally
root@ubuntu:/opt/bigdata-packages/tez# tar -zxvf tez-0.9.2-minimal.tar.gz
# upload the Tez dependencies to HDFS (this time the full tarball, without "minimal")
root@ubuntu:/opt/bigdata-packages/tez# hadoop fs -mkdir /tez
root@ubuntu:/opt/bigdata-packages/tez# hadoop fs -put tez-0.9.2.tar.gz /tez

# create tez-site.xml under $HADOOP_HOME/etc/hadoop/ (do not put it under hive/conf/, it will not take effect there)
root@ubuntu:/opt/bigdata-packages/tez# cd $HADOOP_HOME/etc/hadoop/
root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2/etc/hadoop# vim tez-site.xml
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- make sure the path and file name match what you uploaded -->
<property>
        <name>tez.lib.uris</name>
    <value>${fs.defaultFS}/tez/tez-0.9.2.tar.gz</value>
</property>
<property>
     <name>tez.use.cluster.hadoop-libs</name>
     <value>true</value>
</property>
<property>
     <name>tez.am.resource.memory.mb</name>
     <value>1024</value>
</property>
<property>
     <name>tez.am.resource.cpu.vcores</name>
     <value>1</value>
</property>
<property>
     <name>tez.container.max.java.heap.fraction</name>
     <value>0.4</value>
</property>
<property>
     <name>tez.task.resource.memory.mb</name>
     <value>1024</value>
</property>
<property>
     <name>tez.task.resource.cpu.vcores</name>
     <value>1</value>
</property>
</configuration>

# extend the Hadoop classpath; add the following
root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2/etc/hadoop# vim shellprofile.d/example.sh
hadoop_add_profile tez
function _tez_hadoop_classpath
{
    hadoop_add_classpath "$HADOOP_HOME/etc/hadoop" after
    hadoop_add_classpath "/opt/bigdata-packages/tez/*" after
    hadoop_add_classpath "/opt/bigdata-packages/tez/lib/*" after
}

# switch Hive's execution engine to Tez; add the following
root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2/etc/hadoop# vim $HIVE_HOME/conf/hive-site.xml
<property>
    <name>hive.execution.engine</name>
    <value>tez</value>
</property>
<property>
    <name>hive.tez.container.size</name>
    <value>1024</value>
</property>

# add the Tez paths to hive-env.sh
root@ubuntu:/opt/bigdata-packages/hadoop/hadoop-3.2.2/etc/hadoop# vim $HIVE_HOME/conf/hive-env.sh
export TEZ_HOME=/opt/bigdata-packages/tez    # the directory Tez was extracted into
export TEZ_JARS=""
for jar in `ls $TEZ_HOME |grep jar`; do
    export TEZ_JARS=$TEZ_JARS:$TEZ_HOME/$jar
done
for jar in `ls $TEZ_HOME/lib`; do
    export TEZ_JARS=$TEZ_JARS:$TEZ_HOME/lib/$jar
done

export HIVE_AUX_JARS_PATH=/opt/bigdata-packages/hadoop/hadoop-3.2.2/share/hadoop/common/hadoop-lzo-0.4.21-SNAPSHOT.jar$TEZ_JARS,/opt/bigdata-packages/atlas/apache-atlas-2.1.0/hook/hive/atlas-plugin-classloader-2.1.0.jar,/opt/bigdata-packages/atlas/apache-atlas-2.1.0/hook/hive/hive-bridge-shim-2.1.0.jar

# remove the conflicting logging jar
root@ubuntu:/opt/bigdata-packages/tez# rm lib/slf4j-log4j12-1.7.10.jar
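
A quick way to confirm Hive is actually picking up the new engine (not from the original notes):

hive -e "set hive.execution.engine;"
# expected output: hive.execution.engine=tez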

Note: regarding the earlier problem: after these changes, interrupting the job no longer produces that Tez error in hive.log (the log location is set by property.hive.log.dir in $HIVE_HOME/conf/hive-log4j2.properties), but the CREATE TABLE statement still takes about eight hours, and I still do not know why; if you have run into the same issue, feel free to discuss.
  On 2022-03-29 I installed the same version of Hive on a CentOS 7.2 cluster and this problem never appeared.
