大数据集群部署文档

注意:需配合大数据集群启动&检查文档进行部署,以便可以检验每一个组件是否部署成功。

一、部署前准备

1. 确保所有机器可以访问外网

系统安装时设置:

节点属性        Your name       Your servers name   Pick a username   password
主节点          Master-Ubuntu   master-ubuntu       wckj              wckj
第一从节点      Slave1-Ubuntu   slave1-ubuntu       wckj              wckj
第二从节点      Slave2-Ubuntu   slave2-ubuntu       wckj              wckj

IP地址          机器名          角色
192.168.2.14    master-ubuntu   主节点
192.168.2.15    slave1-ubuntu   第一从节点
192.168.2.16    slave2-ubuntu   第二从节点

除特殊说明外,下方所有操作需要在所有机器上都执行

2. 配置root用户ssh连接

设置root账户密码,wckj

sudo passwd root

输入当前账户密码 -> 输入root账户密码

切换为root账户

su
vi /etc/ssh/sshd_config

添加:

PermitRootLogin yes

重启sshd服务

systemctl restart sshd
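
可选:重启后可以用下面的命令确认配置已生效(sshd -T会输出当前实际生效的配置):

# 确认 permitrootlogin 已为 yes
sshd -T | grep -i permitrootlogin
# 用root账户和上面设置的密码测试本机登录
ssh root@localhost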

3. 解决VMware ESXi 6.5 Ubuntu虚拟机ssh连接后宕机问题

修改虚拟机vmx配置文件:

1.Power off the virtual machine
2.Edit the vmx file and add the below parameter:
vmxnet3.rev.30 = FALSE
3.Power on the virtual machine

vi /vmfs/volumes/663cf90a-fd36742f-0a51-0025905f0570/2.14\ Ubuntu-Master/2.14\ Ubuntu-Master.vmx
vi /vmfs/volumes/663cf90a-fd36742f-0a51-0025905f0570/2.15\ Ubuntu-Slave1/2.15\ Ubuntu-Slave1.vmx
vi /vmfs/volumes/663cf90a-fd36742f-0a51-0025905f0570/2.16\ Ubuntu-Slave2/2.16\ Ubuntu-Slave2.vmx

卸载VMware Tools

apt-get remove open-vm-tools

4. 配置软件源为国内源

备份源

cp /etc/apt/sources.list /etc/apt/sources.list.bak

修改源

sudo bash -c "cat << EOF > /etc/apt/sources.list && apt update
deb http://mirrors.163.com/ubuntu/ jammy main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ jammy-security main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ jammy-updates main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ jammy-backports main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ jammy main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ jammy-security main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ jammy-updates main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ jammy-backports main restricted universe multiverse
EOF"

5. 设置东八时区时间

sudo timedatectl set-timezone Asia/Shanghai
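
设置后可用以下命令确认时区和当前时间是否正确:

timedatectl
date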

6. 关闭ufw防火墙

sudo ufw disable
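
关闭后可用以下命令确认防火墙状态,预期返回类似 Status: inactive:

sudo ufw status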

7. Ubuntu关闭默认开启的自动休眠

① 查询自动休眠服务状态
sudo systemctl status sleep.target

若返回结果为已关闭则无需下方关闭操作

若返回为下方结果,则表示开启
○ sleep.target - Sleep
	Loaded: loaded (/lib/systemd/system/sleep.target; static)
	Active: inactive (dead)
	Docs: man:systemd.special(7)


若返回为下方结果,则表示关闭
○ sleep.target
	Loaded: masked (Reason: Unit sleep.target is masked.)
	Active: inactive (dead)
② 关闭自动休眠服务
sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target

# 返回结果:
Created symlink /etc/systemd/system/sleep.target → /dev/null.
Created symlink /etc/systemd/system/suspend.target → /dev/null.
Created symlink /etc/systemd/system/hibernate.target → /dev/null.
Created symlink /etc/systemd/system/hybrid-sleep.target → /dev/null.
③ 再次查询自动休眠服务状态,确保服务已关闭

8. 安装zip(主节点)

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install zip

9. python2安装部署(主节点,已存在python3的情况)

sudo apt-get update
sudo apt-get upgrade
1. 安装 Python2

可以到 Python 官网去下载安装包进行安装,也可以直接使用如下命令安装 Python2:

sudo apt install python2

安装完成后检查 Python 的版本:

python2 -V

一般 Ubuntu 是自带 python3 版本的:

python3 -V

安装完成后我们可以使用如下命令来检查目前可用的 Python 版本:

ls /usr/bin/python*
2. 设置默认方式(替代版本)

首先查看是否已经配置了 Python 的默认方式(替代版本):

sudo update-alternatives --list python

若没有设置,会显示:

update-alternatives: error: no alternatives for python

若设置了,则是显示你替代的版本,可以以此确认你的备选方案是否可用:

sudo update-alternatives --list python

# 返回结果:
/usr/bin/python2
/usr/bin/python3

然后使用如下命令设置默认方式(替代版本):

sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 1
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 2

选用可选的 Python 版本:

sudo update-alternatives --config python

# 返回结果:
There are 2 choices for the alternative python (providing /usr/bin/python).
Selection    Path              Priority   Status
------------------------------------------------------------
* 0            /usr/bin/python3   2         auto mode
1            /usr/bin/python2   1         manual mode
2            /usr/bin/python3   2         manual mode
Press <enter> to keep the current choice[*], or type selection number: 1

在本例中,选择 1 来选择 Python2
最后,你可以检查你的 Python 版本来确认是否设置成功:

python -V

# 返回结果:
Python 2.7.18

10. 安装部署MySQL(主节点)

确保你的系统的软件包列表是最新的

sudo apt-get update
sudo apt-get upgrade

查看可使用的MySQL安装包:

sudo apt search mysql-server

使用以下命令安装MySQL服务器:

sudo apt install mysql-server-8.0

安装完成后,MySQL服务会自动启动,未启动则使用以下命令启动MySQL服务:

sudo systemctl start mysql

并将MySQL设置为开机自启动:

sudo systemctl enable mysql

检查MySQL状态,出现active则为启动成功。

sudo systemctl status mysql

# 返回结果:
● mysql.service - MySQL Community Server
     Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2024-08-27 13:53:30 CST; 23s ago
   Main PID: 29635 (mysqld)
     Status: "Server is operational"
      Tasks: 38 (limit: 38428)
     Memory: 365.7M
        CPU: 1.218s
     CGroup: /system.slice/mysql.service
             └─29635 /usr/sbin/mysqld

Aug 27 13:53:28 master-ubuntu systemd[1]: Starting MySQL Community Server...
Aug 27 13:53:30 master-ubuntu systemd[1]: Started MySQL Community Server.

登录mysql,在默认安装时如果没有设置密码,则直接回车就能登录成功。

mysql -uroot -p

设置mysql密码

mysql> use mysql;
mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'Wckj@123';
mysql> update user set host = '%' where user = 'root';

刷新缓存,并退出

mysql> flush privileges;
mysql> exit;

测试是否成功

mysql -uroot -pWckj@123

修改mysqld.cnf文件:

sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf

修改 bind-address,保存后重启MySQL即可。

bind-address            = 0.0.0.0
mysqlx-bind-address     = 0.0.0.0

重启MySQL重新加载一下配置:

sudo systemctl restart mysql

检查MySQL状态,出现active则为启动成功。

sudo systemctl status mysql

# 返回结果:
● mysql.service - MySQL Community Server
     Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2024-08-27 13:57:02 CST; 6s ago
    Process: 29930 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
   Main PID: 29938 (mysqld)
     Status: "Server is operational"
      Tasks: 38 (limit: 38428)
     Memory: 366.1M
        CPU: 1.104s
     CGroup: /system.slice/mysql.service
             └─29938 /usr/sbin/mysqld

Aug 27 13:57:00 master-ubuntu systemd[1]: Starting MySQL Community Server...
Aug 27 13:57:02 master-ubuntu systemd[1]: Started MySQL Community Server.
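
完成上述修改后,可以在主节点确认MySQL已监听所有地址,并用新密码做一次连接测试(IP以本文环境为例,需在装有mysql客户端的机器上执行):

# 确认3306端口监听在0.0.0.0
ss -lntp | grep 3306
# 测试远程连接
mysql -h 192.168.2.14 -uroot -pWckj@123 -e "select version();"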

二、配置节点的IP映射及SSH免密

1. 配置节点的IP映射

每台设备分别配置hosts

sudo vi /etc/hosts

不更改IPv6配置;在文件开头注释掉其他配置,并添加以下内容:
格式:IP地址 机器名

192.168.2.14 master-ubuntu
192.168.2.15 slave1-ubuntu
192.168.2.16 slave2-ubuntu

主节点,master-ubuntu

192.168.2.14 master-ubuntu
192.168.2.15 slave1-ubuntu
192.168.2.16 slave2-ubuntu

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

第一从节点,slave1-ubuntu

192.168.2.14 master-ubuntu
192.168.2.15 slave1-ubuntu
192.168.2.16 slave2-ubuntu

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

第二从节点,slave2-ubuntu

192.168.2.14 master-ubuntu
192.168.2.15 slave1-ubuntu
192.168.2.16 slave2-ubuntu

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
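
配置完成后,可在每台机器上用机器名互相ping通,验证映射是否生效:

ping -c 2 master-ubuntu
ping -c 2 slave1-ubuntu
ping -c 2 slave2-ubuntu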

2. 配置节点的SSH免密

配置服务器ssh免密登录,每台设备分别配置免密登录本机及其他服务器

① 本地客户端生成公私钥
sudo ssh-keygen

# 返回结果:
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:cKqkj5WFOczsiCby/Zj3Y35SH4fUNbcUF7Lg/kfrw84 root@master-ubuntu
The key's randomart image is:
+---[RSA 3072]----+
|           . . o+|
|          . . oo+|
|      . .  . o..+|
|   + o +  . . .. |
|    O o S  o . . |
| . = =    . + o .|
|+.o =    . . +.o |
|+. = o. + . . +o |
|  o =o.+o+    .E.|
+----[SHA256]-----+

一直回车默认即可

② 检查公私钥创建成功与否
cd ~/.ssh
ls

# 返回结果:
authorized_keys  id_rsa  id_rsa.pub

id_rsa (私钥)
id_rsa.pub (公钥)

③ 每台机器分别上传公钥到本机及其他服务器

请求确认时,输入yes

sudo ssh-copy-id -i ~/.ssh/id_rsa.pub root@IP


# 返回结果:
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.2.14 (192.168.2.14)' can't be established.
ED25519 key fingerprint is SHA256:sGPCH01tENJRbEgozgTtN49XUKwbrCJJ4YOqh19fcZY.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.14's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.2.14'"
and check to make sure that only the key(s) you wanted were added.

例:
sudo ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.2.14
sudo ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.2.15
sudo ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.2.16

④ 测试免密登录

分别在三台机器中使用IP和机器名对本机和另外两台机器进行免密登录测试
若能够进入免密登录目标机器的命令行,则表示免密登录成功
note:
1.首次使用机器名免密连接时,需要输入yes确认,后续则不再需要
2.连接成功后退出免密连接回到本机,继续建立新连接,不要在新连接内进行免密连接测试!!!!

ssh root@IP
ssh root@机器名

例:
sudo ssh root@192.168.2.14
sudo ssh root@master-ubuntu

使用ctrl+d 退出免密连接

logout

3. 准备集群部署所需安装包,并上传到soft目录内(主节点执行)

mkdir /opt/soft/
序号   名称-版本           文件名
1      JDK-1.8             jdk-8u381-linux-x64.tar.gz
2      Scala-2.12.19       scala-2.12.19.tgz
3      Hadoop-3.3.0        hadoop-3.3.0.tar.gz
4      Zookeeper-3.7.2     apache-zookeeper-3.7.2-bin.tar.gz
5      Flink-1.13.6        flink-1.13.6-bin-scala_2.11.tgz
6      Kafka-2.12-3.6.0    kafka_2.12-3.6.0.tgz
7      Hive-3.1.3          apache-hive-3.1.3-bin.tar.gz
8      Hbase-2.5.6         hbase-2.5.6-bin.tar.gz
9      Solr-7.7.3          solr-7.7.3.tgz
10     Spark-3.4.0         spark-3.4.0-bin-hadoop3.tgz
11     Atlas-2.1.0         apache-atlas-2.1.0-hive-hook.tar.gz、apache-atlas-2.1.0-server.tar.gz
12     datax               datax.tar.gz
13     seatunnel-2.3.4     apache-seatunnel-2.3.4-bin.tar.gz
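
上传完成后,可在主节点核对安装包是否齐全:

ls -lh /opt/soft/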

三、配置Java环境

Java开发环境

  • JDK版本1.8

  • 安装配置

运行命令解压至指定路径下,并重命名文件夹:

tar -zxvf /opt/soft/jdk-8u381-linux-x64.tar.gz -C /opt/
mv /opt/jdk1.8.0_381 /opt/jdk1.8

配置环境变量:

vi /etc/profile

在文本末尾,添加以下内容:

export JAVA_HOME=/opt/jdk1.8
export PATH=$PATH:$JAVA_HOME/bin

添加完成后,运行命令使其立即生效

source /etc/profile
  • 验证

运行以下两个命令,观察是否有参数说明结果:

java
javac

发送配置到每个从节点,并在从节点执行命令使其生效

主节点执行,分别发送Jdk1.8到从节点

scp -r -p /opt/jdk1.8/ root@slave1-ubuntu:/opt/
scp -r -p /opt/jdk1.8/ root@slave2-ubuntu:/opt/

主节点执行,分别发送环境变量文件到从节点

scp -r -p /etc/profile root@slave1-ubuntu:/etc/
scp -r -p /etc/profile root@slave2-ubuntu:/etc/

从节点执行

source /etc/profile

四、 配置Scala环境

  • 安装配置

运行命令解压至指定路径下:

tar -zxvf /opt/soft/scala-2.12.19.tgz -C /opt/

配置环境变量:

vi /etc/profile

在文本末尾,添加以下内容:

export PATH=$PATH:/opt/scala-2.12.19/bin

添加完成后,运行命令使其立即生效

source /etc/profile

发送配置到每个从节点,并在从节点执行命令使其生效

主节点执行,分别发送scala-2.12.19到从节点

scp -r -p /opt/scala-2.12.19/ root@slave1-ubuntu:/opt/
scp -r -p /opt/scala-2.12.19/ root@slave2-ubuntu:/opt/

主节点执行,分别发送环境变量文件到从节点

scp -r -p /etc/profile root@slave1-ubuntu:/etc/
scp -r -p /etc/profile root@slave2-ubuntu:/etc/

从节点执行

source /etc/profile
  • 验证

输入命令scala,安装成功则进入Scala命令行。

scala

ctrl+d 退出scala 或 输入:quit

scala> :quit

五、 Hadoop集群安装部署(主从集群模式)

  • [1]解压

解压至指定目录

tar -zxvf /opt/soft/hadoop-3.3.0.tar.gz -C /opt/
  • [2]修改配置文件

添加环境变量

vi /etc/profile

在文件末尾添加以下内容

export HADOOP_HOME=/opt/hadoop-3.3.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

添加完成后,运行命令使其立即生效

source /etc/profile
  • 修改Hadoop配置文件
  1. core-site.xml
vi /opt/hadoop-3.3.0/etc/hadoop/core-site.xml

添加以下内容:

<configuration>
	<property>
		<name>hadoop.tmp.dir</name>
			<value>/opt/hadoop-3.3.0/tmp</value>
			<description>Abase for other temporary directories.</description>
	</property>
	
	<property>
		<name>fs.defaultFS</name>
			<!-- IP地址为主节点服务器地址 -->
			<value>hdfs://IP地址:9000</value>
	</property>
	
	<property>
		<name>hadoop.proxyuser.root.hosts</name>
		<value>*</value>
	</property>
	
	<property>
		<name>hadoop.proxyuser.root.groups</name>
		<value>*</value>
	</property>
</configuration>
  2. mapred-site.xml
vi /opt/hadoop-3.3.0/etc/hadoop/mapred-site.xml

添加以下内容:

<configuration>
	<!-- 指定 MapReduce 程序运行在 Yarn 上 -->
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>

	<!-- 历史服务器端地址 -->
	<property>
		<name>mapreduce.jobhistory.address</name>
		<!-- master-ubuntu需要改为主节点机器名 -->
		<value>master-ubuntu:10020</value>
	</property>

	<!-- 历史服务器 web 端地址 -->
	<property>
		<name>mapreduce.jobhistory.webapp.address</name>
		<!-- master-ubuntu需要改为主节点机器名 -->
		<value>master-ubuntu:19888</value>
	</property>
</configuration>
  3. hdfs-site.xml
vi /opt/hadoop-3.3.0/etc/hadoop/hdfs-site.xml

添加以下内容:

<configuration>
	<!-- nn web 端访问地址-->
	<property>
		<name>dfs.namenode.http-address</name>
		<!-- master-ubuntu需要改为主节点机器名-->
		<value>master-ubuntu:9870</value>
	</property>

	<!-- 2nn web 端访问地址-->
	<property>
		<name>dfs.namenode.secondary.http-address</name>
		<!-- slave1-ubuntu需要改为第一从节点机器名 -->
		<value>slave1-ubuntu:9868</value>
	</property>
	
	<property>
		<name>dfs.permissions.enabled</name>
		<value>false</value>
	</property>
</configuration>
  4. yarn-site.xml
vi /opt/hadoop-3.3.0/etc/hadoop/yarn-site.xml

添加以下内容:

<configuration>

<!-- Site specific YARN configuration properties -->

	<!-- 指定 MR 走 shuffle -->
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>

	<!-- 指定 ResourceManager 的地址-->
	<property>
		<name>yarn.resourcemanager.hostname</name>
		<!-- master-ubuntu需要改为主节点机器名 -->
		<value>master-ubuntu</value>
	</property>

	<!-- 环境变量的继承 -->
	<property>
		<name>yarn.nodemanager.env-whitelist</name>
		<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
	</property>

	<!-- 开启日志聚集功能 -->
	<property>
		<name>yarn.log-aggregation-enable</name>
		<value>true</value>
	</property>

	<!-- 设置日志聚集服务器地址 -->
	<property>
		<name>yarn.log.server.url</name>
		<!-- IP地址为主节点机器地址 -->
		<value>http://IP地址:19888/jobhistory/logs</value>
	</property>

	<!-- 设置日志保留时间为 7 天 -->
	<property>
		<name>yarn.log-aggregation.retain-seconds</name>
		<value>604800</value>
	</property>
</configuration>
  5. workers
vi /opt/hadoop-3.3.0/etc/hadoop/workers

添加集群所有服务器的机器名:

master-ubuntu
slave1-ubuntu
slave2-ubuntu

发送配置到每个从节点,并执行命令使其生效

主节点执行,分别发送hadoop-3.3.0到从节点

scp -r -p /opt/hadoop-3.3.0 root@slave1-ubuntu:/opt/
scp -r -p /opt/hadoop-3.3.0 root@slave2-ubuntu:/opt/

主节点执行,分别发送环境变量文件到从节点

scp -r -p /etc/profile root@slave1-ubuntu:/etc/
scp -r -p /etc/profile root@slave2-ubuntu:/etc/

从节点机器执行

source /etc/profile

core-site.xml 中的 hadoop.tmp.dir的路径可以根据自己的习惯进行设置。

hdfs-site.xml 中的 dfs.namenode.name.dir和dfs.datanode.data.dir的路径可以自由设置,最好在hadoop.tmp.dir的目录下面。

  • [3]启动

首先初始化HDFS,执行以下命令(命令使用绝对路径,可在任意目录下执行):

/opt/hadoop-3.3.0/bin/hdfs namenode -format

过程中需要进行ssh验证,之前已经登录过,初始化过程中直接键入Y即可;如果第一遍没有出现确认提示,就再次运行一遍。

初始化成功时,日志中会出现:

common.Storage: Storage directory /opt/hadoop-3.3.0/tmp/dfs/name has been successfully formatted.

启动Hadoop命令(在Hadoop目录下执行sbin/start-all.sh,或直接使用绝对路径):

/opt/hadoop-3.3.0/sbin/start-all.sh
  • [4]验证

执行jps命令,查看主节点是否出现以下进程

NameNode、DataNode、ResourceManager、NodeManager

第一从节点是否出现

SecondaryNameNode、DataNode、NodeManager

第二从节点是否出现

DataNode、NodeManager
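
若已配置SSH免密,也可以在主节点用一条命令查看三台机器的进程(机器名以本文环境为例):

for h in master-ubuntu slave1-ubuntu slave2-ubuntu; do echo "==== $h ===="; ssh root@$h "source /etc/profile && jps"; done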

  • [5]报错处理

错误1:

若出现以下错误,则需要在主节点服务器和第一从节点服务器的环境变量中添加以下配置

Starting namenodes on [master-ubuntu]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [slave1-ubuntu]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
Starting resourcemanager
ERROR: Attempting to operate on yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting operation.
Starting nodemanagers
ERROR: Attempting to operate on yarn nodemanager as root
ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting operation.
vi /etc/profile

在文件末尾添加以下内容

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

错误2:

若出现以下错误,则需要在主节点服务器和每个从节点服务器的hadoop-env.sh文件中添加以下配置

master-ubuntu: ERROR: JAVA_HOME is not set and could not be found.
Starting datanodes
localhost: Warning: Permanently added 'localhost' (ED25519) to the list of known hosts.
localhost: ERROR: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [slave1-ubuntu]
slave1-ubuntu: ERROR: JAVA_HOME is not set and could not be found.
Starting resourcemanager
Starting nodemanagers
localhost: ERROR: JAVA_HOME is not set and could not be found.
vi /opt/hadoop-3.3.0/etc/hadoop/hadoop-env.sh

在# Set Hadoop-specific environment variables here.另起一行添加以下内容

export JAVA_HOME="/opt/jdk1.8"

六、 Zookeeper集群安装部署

  • [1]解压
tar -zxvf /opt/soft/apache-zookeeper-3.7.2-bin.tar.gz -C /opt/
mv /opt/apache-zookeeper-3.7.2-bin/ /opt/zookeeper-3.7.2/
  • [2]修改配置

复制一份配置文件,zoo_sample.cfg是不生效的,zoo.cfg才生效

cp /opt/zookeeper-3.7.2/conf/zoo_sample.cfg /opt/zookeeper-3.7.2/conf/zoo.cfg

修改zoo.cfg中用于zookeeper存储持久化数据的文件路径

vi /opt/zookeeper-3.7.2/conf/zoo.cfg

修改及添加以下内容

#修改
dataDir=/opt/zookeeper-3.7.2/zkdata
#  slave1-ubuntu改为第一从节点服务器名
#  master-ubuntu改为主节点服务器名
#  slave2-ubuntu改为第二从节点服务器名
server.1=slave1-ubuntu:2888:3888
server.2=master-ubuntu:2888:3888
server.3=slave2-ubuntu:2888:3888

保存退出后,执行命令,创建对应文件目录

mkdir /opt/zookeeper-3.7.2/zkdata

发送配置到每个节点

scp -r -p /opt/zookeeper-3.7.2/ root@slave1-ubuntu:/opt/
scp -r -p /opt/zookeeper-3.7.2/ root@slave2-ubuntu:/opt/

在zkdata文件目录中创建myid文件,写入集群节点id,每个节点id与上方server配置对应

例:主节点服务器写入2,第一从节点服务器写入1,第二从节点服务器写入3

vi /opt/zookeeper-3.7.2/zkdata/myid
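
除了用vi手动写入,也可以直接用echo写入(每台机器的数字需与上面的server.N编号对应,这里以本文的对应关系为例):

# 主节点
echo 2 > /opt/zookeeper-3.7.2/zkdata/myid
# 第一从节点
echo 1 > /opt/zookeeper-3.7.2/zkdata/myid
# 第二从节点
echo 3 > /opt/zookeeper-3.7.2/zkdata/myid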
  • [3]启动

在每个节点的zookeeper目录执行

/opt/zookeeper-3.7.2/bin/zkServer.sh start
  • [4]验证

执行jps命令,出现QuorumPeerMain进程为启动成功

执行bin/zkServer.sh status可以查看当前节点在zookeeper集群的角色定位,例如:leader、follower

/opt/zookeeper-3.7.2/bin/zkServer.sh status

七、 Flink集群安装部署

  • [1]解压
tar -zxvf /opt/soft/flink-1.13.6-bin-scala_2.11.tgz -C /opt/
  • [2]修改配置文件

修改flink web页面访问IP限制,改为任意节点都可以访问

vi /opt/flink-1.13.6/conf/flink-conf.yaml

改为主节点主机名,修改内容如下

#修改
jobmanager.rpc.address: master-ubuntu
#新增
jobmanager.bind-host: 0.0.0.0
taskmanager.bind-host: 0.0.0.0
taskmanager.host: master-ubuntu

#取消注释并修改
#用于Web UI,与JobManager保持一致
rest.address: master-ubuntu
rest.bind-address: 0.0.0.0

修改masters文件

vi /opt/flink-1.13.6/conf/masters

改为主节点主机名,修改内容如下

master-ubuntu:8081

修改workers文件,将三台节点的主机名都写进配置文件中:

vi /opt/flink-1.13.6/conf/workers

修改内容如下

master-ubuntu
slave1-ubuntu
slave2-ubuntu

发送配置到每个节点

scp -r -p /opt/flink-1.13.6 root@slave1-ubuntu:/opt/
scp -r -p /opt/flink-1.13.6 root@slave2-ubuntu:/opt/
  • [3]启动

在flink目录下执行

/opt/flink-1.13.6/bin/start-cluster.sh
  • [4]验证

运行jps命令
主节点出现StandaloneSessionClusterEntrypoint、TaskManagerRunner进程
从节点出现TaskManagerRunner进程,则运行成功

访问 http://主节点IP:8081,本文环境为 http://192.168.2.14:8081

八、 Kafka集群安装部署

  • [1]解压
tar -zxvf /opt/soft/kafka_2.12-3.6.0.tgz -C /opt/
mv /opt/kafka_2.12-3.6.0/ /opt/kafka-3.6.0/
  • [2]修改配置
vi /opt/kafka-3.6.0/config/server.properties

修改以下内容

#集群部署这里的id不能重复,示例:主节点为0,第一从节点为1,第二从节点为2
broker.id=0
#取消注释,kafka部署的机器ip和提供服务的端口号
listeners=PLAINTEXT://:9092
#修改,kafka的消息存储文件
log.dir=/opt/kafka-3.6.0/data/kafka-logs
#修改,kafka连接zookeeper的地址,分别改为主节点机器名,第一从节点机器名,第二从节点机器名
zookeeper.connect=master-ubuntu:2181,slave1-ubuntu:2181,slave2-ubuntu:2181

运行命令创建对应目录结构

mkdir /opt/kafka-3.6.0/data/
mkdir /opt/kafka-3.6.0/data/kafka-logs/

发送配置到每个从节点,并按照第[2]步的说明修改各从节点的broker.id(远程修改示例见下方)

scp -r -p /opt/kafka-3.6.0 root@slave1-ubuntu:/opt/
scp -r -p /opt/kafka-3.6.0 root@slave2-ubuntu:/opt/
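
broker.id可以在从节点上手动编辑,也可以在主节点用sed远程修改,以下示例按照本文的编号约定(主节点0、第一从节点1、第二从节点2):

ssh root@slave1-ubuntu "sed -i 's/^broker.id=0/broker.id=1/' /opt/kafka-3.6.0/config/server.properties"
ssh root@slave2-ubuntu "sed -i 's/^broker.id=0/broker.id=2/' /opt/kafka-3.6.0/config/server.properties"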
  • [3]启动

在每个节点的kafka目录执行以下命令

bin/kafka-server-start.sh -daemon config/server.properties
  • [4]验证

运行jps命令,出现kafka进程则启动成功
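
也可以创建一个测试topic并列出,进一步确认集群可用(topic名称test仅为示例,验证后即删除):

/opt/kafka-3.6.0/bin/kafka-topics.sh --bootstrap-server master-ubuntu:9092 --create --replication-factor 3 --partitions 3 --topic test
/opt/kafka-3.6.0/bin/kafka-topics.sh --bootstrap-server master-ubuntu:9092 --list
/opt/kafka-3.6.0/bin/kafka-topics.sh --bootstrap-server master-ubuntu:9092 --delete --topic test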

九、 Hive集群安装部署(仅主节点)

  • [1]解压
tar -xzvf /opt/soft/apache-hive-3.1.3-bin.tar.gz -C /opt/
mv /opt/apache-hive-3.1.3-bin/ /opt/hive-3.1.3/
  • [2]修改配置

添加环境变量

vi /etc/profile

在文件末尾添加以下内容

export HIVE_HOME=/opt/hive-3.1.3
export PATH=$PATH:$HIVE_HOME/bin

添加完成后,运行命令使其立即生效

source /etc/profile

在HDFS上创建Hive所需目录

hadoop fs -mkdir -p /tmp
hadoop fs -mkdir -p /user/hive-3.1.3/warehouse
hadoop fs -chmod g+w  /user/hive-3.1.3/warehouse
hadoop fs -chmod g+w  /tmp
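
创建完成后可以检查目录是否存在及权限是否正确:

hadoop fs -ls /
hadoop fs -ls /user/hive-3.1.3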

上传MySQL的驱动到hive的lib目录下

cp /opt/soft/mysql-connector-j-8.0.33.jar /opt/hive-3.1.3/lib/

进入hive配置文件目录,修改配置文件名

cd /opt/hive-3.1.3/conf
mv beeline-log4j2.properties.template beeline-log4j2.properties
mv hive-default.xml.template hive-default.xml
mv hive-env.sh.template hive-env.sh
mv hive-exec-log4j2.properties.template hive-exec-log4j2.properties
mv hive-log4j2.properties.template hive-log4j2.properties
mv llap-cli-log4j2.properties.template llap-cli-log4j2.properties
mv llap-daemon-log4j2.properties.template llap-daemon-log4j2.properties

添加hive-site.xml文件
(sed命令用于删除hive-site.xml中<configuration>与</configuration>之间的内容)

cp /opt/hive-3.1.3/conf/hive-default.xml /opt/hive-3.1.3/conf/hive-site.xml
sed -i '/<configuration>/,/<\/configuration>/{//!d}' /opt/hive-3.1.3/conf/hive-site.xml
  • 修改hive-site.xml
vi /opt/hive-3.1.3/conf/hive-site.xml

在hive-site.xml里添加以下内容

	<property>
		<name>hive.metastore.warehouse.dir</name>
		<value>/user/hive-3.1.3/warehouse</value>
		<description/>
	</property>
	
	<property>
		<name>javax.jdo.option.ConnectionURL</name>
		<value>jdbc:mysql://主节点IP:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false&amp;allowPublicKeyRetrieval=true</value>
		<description>数据库连接</description>
	</property>
	
	<property>
		<name>javax.jdo.option.ConnectionDriverName</name>
		<value>com.mysql.jdbc.Driver</value>
		<description/>
	</property>
	
	<property>
		<name>javax.jdo.option.ConnectionUserName</name>
		<value>root</value>
		<description/>
	</property>
	
	<property>
		<name>javax.jdo.option.ConnectionPassword</name>
		<value>Wckj@123</value>
		<description/>
	</property>
	
	<property>
		<name>hive.querylog.location</name>
		<value>/home/hadoop/logs/hive-3.1.3/job-logs/${user.name}</value>
		<description>Location of Hive run time structured log file</description>
	</property>
	
	<property>
		<name>hive.exec.scratchdir</name>
		<value>/user/hive-3.1.3/tmp</value>
	</property>

初始化hive

mkdir /opt/hive-3.1.3/logs
/opt/hive-3.1.3/bin/schematool -dbType mysql -initSchema root Wckj@123

成功返回:
Initialization script completed
schemaTool completed
  • [3]启动

执行命令

hive

后台启动命令

/opt/hive-3.1.3/bin/hive --service metastore >> ${HIVE_HOME}/logs/metastore.log 2>&1 &
/opt/hive-3.1.3/bin/hiveserver2 >> ${HIVE_HOME}/logs/hiveserver2.log 2>&1 &

返回结果为两个任务的Pid

后台启动后进行验证,查看是否有两个RunJar进程:

jps
  • [4]验证

利用hive命令行查看安装结果,返回结果一致表示成功

hive
hive> show databases;

OK
default
Time taken: 0.262 seconds, Fetched: 1 row(s)

ctrl+d 退出hive

  • 调整hive数据库中文字符集乱码问题(不影响使用,按需执行):

在MySQL的Hive元数据库(即前文配置的hive库)中执行以下语句(连接示例见本节下方)

# 修改字段注释字符集
alter table COLUMNS_V2 modify column COMMENT varchar(256) character set utf8;
# 修改表注释字符集
alter table TABLE_PARAMS modify column PARAM_VALUE varchar(2000) character set utf8;
# 修改分区参数,支持分区键用中文表示
alter table PARTITION_PARAMS modify column PARAM_VALUE varchar(2000) character set utf8;
alter table PARTITION_KEYS modify column PKEY_COMMENT varchar(2000) character set utf8;
# 修改索引名注释,支持中文表示
alter table INDEX_PARAMS modify column PARAM_VALUE varchar(4000) character set utf8;
# 修改视图,支持视图中文
ALTER TABLE TBLS modify COLUMN VIEW_EXPANDED_TEXT mediumtext CHARACTER SET utf8;
ALTER TABLE TBLS modify COLUMN VIEW_ORIGINAL_TEXT mediumtext CHARACTER SET utf8;
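
这些语句需要连接到Hive的元数据库中执行,一个连接示例如下(密码为前文设置的MySQL root密码):

mysql -uroot -pWckj@123

mysql> use hive;

之后依次执行上面的alter语句即可。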

报错处理:
错误1:
若在初始化时出现以下错误,则需要先删除hive中guava-19.0.jar包,并将Hadoop中的guava-27.0-jre.jar包拷贝到hive的lib目录下

Exception in thread "main" java.lang.RuntimeException: com.ctc.wstx.exc.WstxEOFException: Unexpected end of input block in comment
at [row,col,system-id]: [52,16,"file:/opt/hive/conf/hive-site.xml"]
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3051)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:3000)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2875)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1484)
at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:4999)
at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:5072)
at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5159)
at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5107)
at org.apache.hive.beeline.HiveSchemaTool.<init>(HiveSchemaTool.java:96)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1473)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: com.ctc.wstx.exc.WstxEOFException: Unexpected end of input block in comment

命令示例,具体guava包版本位置以实际环境为准:

rm -rf /opt/hive-3.1.3/lib/guava-19.0.jar 
cp /opt/hadoop-3.3.0/share/hadoop/common/lib/guava-27.0-jre.jar /opt/hive-3.1.3/lib/

十、 Hbase集群安装部署

  • [1]解压
tar -zxvf /opt/soft/hbase-2.5.6-bin.tar.gz -C /opt/
  • [2]修改配置

添加环境变量

vi /etc/profile

在文件末尾添加以下内容

export HBASE_HOME=/opt/hbase-2.5.6
export PATH=$PATH:$HBASE_HOME/bin

添加完成后,运行命令使其立即生效

source /etc/profile

创建日志文件夹

mkdir /opt/hbase-2.5.6/logs
vi /opt/hbase-2.5.6/conf/hbase-env.sh

修改/opt/hbase-2.5.6/conf/hbase-env.sh文件,在# Set environment variables here.另起一行添加以下内容:

# Set environment variables here.
export JAVA_HOME=/opt/jdk1.8
export HBASE_MANAGES_ZK=false
export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true"
  • 修改hbase-site.xml文件
    如果Hadoop为高可用模式,则rootdir不能指定为固定IP,需按照hdfs-site.xml中指定集群名称(这里使用的主从集群模式)
vim /opt/hbase-2.5.6/conf/hbase-site.xml

添加以下内容:
	<property>
		<name>hbase.rootdir</name>
		<value>hdfs://master-ubuntu:9000/hbase</value>
	</property>
	
	<property>
		<name>hbase.cluster.distributed</name>
		<value>true</value>
	</property>
	
	<property>
		<name>hbase.zookeeper.quorum</name>
		<value>master-ubuntu:2181,slave1-ubuntu:2181,slave2-ubuntu:2181</value>
	</property>

将Hadoop配置文件,利用软连接同步到hbase配置中:

ln -s /opt/hadoop-3.3.0/etc/hadoop/core-site.xml /opt/hbase-2.5.6/conf/core-site.xml
ln -s /opt/hadoop-3.3.0/etc/hadoop/hdfs-site.xml /opt/hbase-2.5.6/conf/hdfs-site.xml

修改regionservers文件

vim /opt/hbase-2.5.6/conf/regionservers
master-ubuntu
slave1-ubuntu
slave2-ubuntu

发送配置到每个节点

scp -r -p /opt/hbase-2.5.6/ root@slave1-ubuntu:/opt/
scp -r -p /opt/hbase-2.5.6/ root@slave2-ubuntu:/opt/
  • [3]启动

切换到hbase安装目录下,执行命令

/opt/hbase-2.5.6/bin/start-hbase.sh

如果启动报错: JAVA_HOME is not set and Java could not be found

在配置文件目录下,hbase-env.sh 添加:export JAVA_HOME=/opt/jdk1.8

  • [4]验证

同一网段下的电脑浏览器访问 主节点IP:16010,出现ui界面则启动成功
http://192.168.2.14:16010/
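
也可以先在各节点执行jps确认进程:按本文配置,主节点一般应出现HMaster和HRegionServer,两个从节点应出现HRegionServer:

jps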

十一、 Solr集群安装部署

  • [1]解压
tar -zxvf /opt/soft/solr-7.7.3.tgz -C /opt/
  • [2]修改配置
vim /opt/solr-7.7.3/bin/solr.in.sh

在# of this file is completely commented.另起一行,添加以下内容:

ZK_HOST="master-ubuntu:2181,slave1-ubuntu:2181,slave2-ubuntu:2181"
SOLR_HOST="master-ubuntu"
SOLR_PORT=8983

修改完成后分发各节点

scp -r -p /opt/solr-7.7.3/ root@slave1-ubuntu:/opt/
scp -r -p /opt/solr-7.7.3/ root@slave2-ubuntu:/opt/

修改从节点solr.in.sh内的SOLR_HOST为当前节点IP或对应主机名。

vim /opt/solr-7.7.3/bin/solr.in.sh

第一从节点

SOLR_HOST="slave1-ubuntu"

第二从节点

SOLR_HOST="slave2-ubuntu"

修改所有节点的zookeeper配置

vim /opt/zookeeper-3.7.2/conf/zoo.cfg

末尾添加以下内容

4lw.commands.whitelist=mntr,conf,ruok
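
zoo.cfg修改后需要重启zookeeper才能生效,在每个节点执行:

/opt/zookeeper-3.7.2/bin/zkServer.sh restart

# 重启后可验证四字命令是否已放开(需已安装nc,正常会返回imok)
echo ruok | nc localhost 2181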
  • [3]启动

在各节点的solr安装目录分别执行以下命令:

/opt/solr-7.7.3/bin/solr start -force
  • [4]验证
    同一网段下的电脑浏览器访问主节点IP:8983,正常显示页面且出现Cloud标签,则表明solr cloud集群模式启动成功。
    http://192.168.2.14:8983/

十二、 Spark集群安装部署

  • [1]解压
tar -zxvf /opt/soft/spark-3.4.0-bin-hadoop3.tgz -C /opt/
  • [2]修改配置
mv /opt/spark-3.4.0-bin-hadoop3/conf/workers.template /opt/spark-3.4.0-bin-hadoop3/conf/workers
mv /opt/spark-3.4.0-bin-hadoop3/conf/spark-env.sh.template /opt/spark-3.4.0-bin-hadoop3/conf/spark-env.sh
mv /opt/spark-3.4.0-bin-hadoop3/conf/spark-defaults.conf.template /opt/spark-3.4.0-bin-hadoop3/conf/spark-defaults.conf

编辑workers文件

vi /opt/spark-3.4.0-bin-hadoop3/conf/workers

删除localhost,添加

master-ubuntu
slave1-ubuntu
slave2-ubuntu

编辑spark-env文件

vi /opt/spark-3.4.0-bin-hadoop3/conf/spark-env.sh

在# Copy it as spark-env.sh and edit that to configure Spark for your site.,另起一行,添加以下内容:

export JAVA_HOME=/opt/jdk1.8
export SPARK_MASTER_WEBUI_PORT=8888

修改完成后分发各节点

scp -r -p /opt/spark-3.4.0-bin-hadoop3/ root@slave1-ubuntu:/opt/
scp -r -p /opt/spark-3.4.0-bin-hadoop3/ root@slave2-ubuntu:/opt/
  • [3]启动
/opt/spark-3.4.0-bin-hadoop3/sbin/start-all.sh
  • [4]验证
    同一网段下的电脑浏览器访问 主节点IP:8888,出现ui界面则启动成功
    http://192.168.2.14:8888/

十三、 Atlas单节点安装部署(仅主节点部署)

  • [1]解压编译
tar -xzvf /opt/soft/apache-atlas-2.1.0-server.tar.gz -C /opt/
tar -xzvf /opt/soft/apache-atlas-2.1.0-hive-hook.tar.gz -C /opt/
mv /opt/apache-atlas-hive-hook-2.1.0/hook /opt/apache-atlas-2.1.0/
mv /opt/apache-atlas-hive-hook-2.1.0/hook-bin/ /opt/apache-atlas-2.1.0/
rm -rf /opt/apache-atlas-hive-hook-2.1.0/
  • [2]修改配置

在/opt/apache-atlas-2.1.0/conf/atlas-env.sh中添加HBASE_CONF_DIR

vim /opt/apache-atlas-2.1.0/conf/atlas-env.sh
#取消注释并修改
export JAVA_HOME=/opt/jdk1.8
#另起一行添加
export HBASE_CONF_DIR=/opt/hbase-2.5.6/conf
#对比配置,因为使用外部hbase和solr,所以选择false
export MANAGE_LOCAL_SOLR=false
export MANAGE_LOCAL_HBASE=false

Atlas集成Hbase

添加Hbase集群配置文件到Atlas下

ln -s /opt/hbase-2.5.6/conf/ /opt/apache-atlas-2.1.0/conf/hbase/

修改Atlas配置文件目录/opt/apache-atlas-2.1.0/conf/atlas-application.properties文件。

vim /opt/apache-atlas-2.1.0/conf/atlas-application.properties
#修改atlas存储数据主机,zookeeper地址
atlas.graph.storage.hostname=master-ubuntu:2181,slave1-ubuntu:2181,slave2-ubuntu:2181

Atlas集成Solr
修改Atlas配置文件目录/opt/apache-atlas-2.1.0/conf/atlas-application.properties文件。

vim /opt/apache-atlas-2.1.0/conf/atlas-application.properties
#修改如下配置,zookeeper地址
atlas.graph.index.search.solr.zookeeper-url=master-ubuntu:2181,slave1-ubuntu:2181,slave2-ubuntu:2181

将Atlas自带的Solr文件夹拷贝到外部Solr集群的各个节点(solr三台机器都需要拷贝)

cp -r /opt/apache-atlas-2.1.0/conf/solr/ /opt/solr-7.7.3/
mv /opt/solr-7.7.3/solr/ /opt/solr-7.7.3/atlas_conf
scp -r -p /opt/solr-7.7.3/atlas_conf/ root@slave1-ubuntu:/opt/solr-7.7.3/
scp -r -p /opt/solr-7.7.3/atlas_conf/ root@slave2-ubuntu:/opt/solr-7.7.3/

创建solr索引,主节点执行即可

/opt/solr-7.7.3/bin/solr create -c vertex_index -d /opt/solr-7.7.3/atlas_conf -shards 3 -replicationFactor 2 -force
Created collection 'vertex_index' with 3 shard(s), 2 replica(s) with config-set 'vertex_index'

/opt/solr-7.7.3/bin/solr create -c edge_index -d /opt/solr-7.7.3/atlas_conf -shards 3 -replicationFactor 2 -force
Created collection 'edge_index' with 3 shard(s), 2 replica(s) with config-set 'edge_index'

/opt/solr-7.7.3/bin/solr create -c fulltext_index -d /opt/solr-7.7.3/atlas_conf -shards 3 -replicationFactor 2 -force
Created collection 'fulltext_index' with 3 shard(s), 2 replica(s) with config-set 'fulltext_index'

Atlas集成Kafka
修改Atlas配置文件目录/opt/apache-atlas-2.1.0/conf/atlas-application.properties文件。

vim /opt/apache-atlas-2.1.0/conf/atlas-application.properties
#########  Notification Configs  #########
# 如果要使用外部的kafka,则改为false
atlas.notification.embedded=false
# 内嵌kafka会根据此端口启动一个zk实例
#atlas.kafka.zookeeper.connect=localhost:9026 # 如果使用外部kafka,则填写外部zookeeper地址
#atlas.kafka.bootstrap.servers=localhost:9027 # 如果使用外部kafka,则填写外部broker server地址
atlas.kafka.zookeeper.connect=master-ubuntu:2181,slave1-ubuntu:2181,slave2-ubuntu:2181
atlas.kafka.bootstrap.servers=master-ubuntu:9092,slave1-ubuntu:9092,slave2-ubuntu:9092
atlas.kafka.zookeeper.session.timeout.ms=4000
atlas.kafka.zookeeper.connection.timeout.ms=2000
atlas.kafka.enable.auto.commit=true

命令行创建topic:

/opt/kafka-3.6.0/bin/kafka-topics.sh --bootstrap-server master-ubuntu:9092,slave1-ubuntu:9092,slave2-ubuntu:9092 --create --replication-factor 3 --partitions 3 --topic ATLAS_HOOK

WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic ATLAS_HOOK
/opt/kafka-3.6.0/bin/kafka-topics.sh --bootstrap-server master-ubuntu:9092,slave1-ubuntu:9092,slave2-ubuntu:9092 --create --replication-factor 3 --partitions 3 --topic ATLAS_ENTITIES

WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic ATLAS_ENTITIES.

配置Atlas Web Server HA
修改Atlas配置文件目录/opt/apache-atlas-2.1.0/conf/atlas-application.properties文件。

vim /opt/apache-atlas-2.1.0/conf/atlas-application.properties
#########  Server Properties  #########
hive.atlas.hook=true
hive.exec.post.hooks=org.apache.atlas.hive.hook.HiveHook
atlas.rest.address=http://master-ubuntu:21000
# If enabled and set to true, this will run setup steps when the server starts
atlas.server.run.setup.on.start=false

#########  Entity Audit Configs  #########
atlas.audit.hbase.zookeeper.quorum=master-ubuntu:2181,slave1-ubuntu:2181,slave2-ubuntu:2181

#########  High Availability Configuration ########
atlas.server.ha.enabled=true
atlas.server.ids=id1
atlas.server.address.id1=master-ubuntu:21000
atlas.server.ha.zookeeper.connect=master-ubuntu:2181,slave1-ubuntu:2181,slave2-ubuntu:2181

Atlas集成Hive

进入Atlas配置文件目录:修改/opt/apache-atlas-2.1.0/conf/atlas-application.properties文件。

vim /opt/apache-atlas-2.1.0/conf/atlas-application.properties

添加内容:

######### Hive Hook Configs #######
atlas.hook.hive.synchronous=false
atlas.hook.hive.numRetries=3
atlas.hook.hive.queueSize=10000
atlas.cluster.name=primary

记录性能指标:修改/opt/apache-atlas-2.1.0/conf/atlas-log4j.xml文件

vim /opt/apache-atlas-2.1.0/conf/atlas-log4j.xml
#去掉如下代码的注释
	<appender name="perf_appender" class="org.apache.log4j.DailyRollingFileAppender">
		<param name="file" value="${atlas.log.dir}/atlas_perf.log" />
		<param name="datePattern" value="'.'yyyy-MM-dd" />
		<param name="append" value="true" />
		<layout class="org.apache.log4j.PatternLayout">
			<param name="ConversionPattern" value="%d|%t|%m%n" />
		</layout>
	</appender>
	
	<logger name="org.apache.atlas.perf" additivity="false">
		<level value="debug" />
		<appender-ref ref="perf_appender" />
	</logger>

将atlas-application.properties配置文件加入到atlas-plugin-classloader-2.1.0.jar中(只需要在master-ubuntu执行即可,会将master-ubuntu的atlas安装包拷贝到hive所在每一台服务器)

cd /opt/apache-atlas-2.1.0/conf
zip -u /opt/apache-atlas-2.1.0/hook/hive/atlas-plugin-classloader-2.1.0.jar atlas-application.properties
cp /opt/apache-atlas-2.1.0/conf/atlas-application.properties /opt/hive-3.1.3/conf/

在/opt/hive-3.1.3/conf/hive-site.xml文件中设置Atlas hook

vim /opt/hive-3.1.3/conf/hive-site.xml
	<property>
		<name>hive.exec.post.hooks</name>
		<value>org.apache.atlas.hive.hook.HiveHook</value>
	</property>
vim /opt/hive-3.1.3/conf/hive-env.sh

取消注释并修改

export HIVE_AUX_JARS_PATH=/opt/apache-atlas-2.1.0/hook/hive/atlas-plugin-classloader-2.1.0.jar,/opt/apache-atlas-2.1.0/hook/hive/hive-bridge-shim-2.1.0.jar
  • [3]启动
/opt/apache-atlas-2.1.0/bin/atlas_start.py
  • [4]验证
    http://主节点IP:21000

启动后等待10分钟打开网页
若显示登录界面,则使用下方用户名及密码登录,登录成功后正常显示网页则表示部署成功
用户名:admin 密码:admin

若显示
HTTP ERROR 503
Problem accessing /. Reason:
Service Unavailable
检查日志文件解决报错,若无报错,再次等待10分钟

/opt/apache-atlas-2.1.0/bin/atlas_admin.py -status

十四、 datax

tar -xzvf /opt/soft/datax.tar.gz -C /opt/
scp -r -p /opt/datax/ root@slave1-ubuntu:/opt/
scp -r -p /opt/datax/ root@slave2-ubuntu:/opt/
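
DataX解压后可以用自带的示例作业做一次自检(假设安装包内带有job/job.json示例,且python已按前文配置可用):

python /opt/datax/bin/datax.py /opt/datax/job/job.json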

十五、 apache-seatunnel

tar -xzvf /opt/soft/apache-seatunnel-2.3.4-bin.tar.gz -C /opt/
scp -r -p /opt/apache-seatunnel-2.3.4/ root@slave1-ubuntu:/opt/
scp -r -p /opt/apache-seatunnel-2.3.4/ root@slave2-ubuntu:/opt/

十六、 apache-dolphinscheduler(单节点)

配置可参考:
https://github.com/apache/dolphinscheduler/blob/3.2.2-release/docs/docs/zh/guide/howto/datasource-setting.md

tar -xzvf /opt/soft/apache-dolphinscheduler-dev-SNAPSHOT-bin.tar.gz -C /opt/

修改dolphinscheduler_env.sh

vi /opt/apache-dolphinscheduler-dev-SNAPSHOT-bin/bin/env/dolphinscheduler_env.sh
# JAVA_HOME, will use it to start DolphinScheduler server
export JAVA_HOME=${JAVA_HOME:-/opt/jdk1.8}

# Database related configuration, set database type, username and password
export DATABASE=${DATABASE:-mysql}
export SPRING_PROFILES_ACTIVE=${DATABASE}
export SPRING_DATASOURCE_URL="jdbc:mysql://192.168.2.14:3306/dolphinscheduler"
export SPRING_DATASOURCE_USERNAME=dolphinscheduler
export SPRING_DATASOURCE_PASSWORD=Wckj@123

# DolphinScheduler server related configuration
export SPRING_CACHE_TYPE=${SPRING_CACHE_TYPE:-none}
export SPRING_JACKSON_TIME_ZONE=${SPRING_JACKSON_TIME_ZONE:-UTC}
export MASTER_FETCH_COMMAND_NUM=${MASTER_FETCH_COMMAND_NUM:-10}

# Registry center configuration, determines the type and link of the registry center
export REGISTRY_TYPE=${REGISTRY_TYPE:-zookeeper}
export REGISTRY_ZOOKEEPER_CONNECT_STRING=${REGISTRY_ZOOKEEPER_CONNECT_STRING:-master-ubuntu:2181}

# Tasks related configurations, need to change the configuration if you use the related tasks.
export HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop-3.3.0}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/opt/hadoop-3.3.0/etc/hadoop}
export SPARK_HOME=${SPARK_HOME:-/opt/spark-3.4.0-bin-hadoop3/}
export PYTHON_LAUNCHER=${PYTHON_LAUNCHER:-/usr/bin/python3}
export HIVE_HOME=${HIVE_HOME:-/opt/hive-3.1.3}
export FLINK_HOME=${FLINK_HOME:-/opt/flink-1.13.6/}
export DATAX_LAUNCHER=${DATAX_LAUNCHER:-/opt/datax/bin/datax.py}
export DATAX_HOME=${DATAX_HOME:-/opt/datax}
export PYTHON_HOME=${PYTHON_HOME:-/usr/bin/python3}

修改install_env.sh,保证/opt目录下没有dolphinscheduler同名文件夹

vi /opt/apache-dolphinscheduler-dev-SNAPSHOT-bin/bin/env/install_env.sh
ips=${ips:-"master-ubuntu,slave1-ubuntu,slave2-ubuntu"}
masters=${masters:-"master-ubuntu,slave1-ubuntu,slave2-ubuntu"}
workers=${workers:-"master-ubuntu:default,slave1-ubuntu:default,slave2-ubuntu:default"}
alertServer=${alertServer:-"slave2-ubuntu"}
apiServers=${apiServers:-"master-ubuntu,slave1-ubuntu,slave2-ubuntu"}
installPath=${installPath:-"/opt/dolphinscheduler"}

安装dolphinscheduler

/opt/apache-dolphinscheduler-dev-SNAPSHOT-bin/bin/install.sh

配置dolphinscheduler数据库

mysql -uroot -pWckj@123
mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
mysql> CREATE USER 'dolphinscheduler'@'%' IDENTIFIED BY 'Wckj@123';
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'%';
mysql> CREATE USER 'dolphinscheduler'@'localhost' IDENTIFIED BY 'Wckj@123';
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'localhost';
mysql> FLUSH PRIVILEGES;

执行dolphinscheduler数据库初始化语句

chmod u+x /opt/dolphinscheduler/tools/bin/upgrade-schema.sh
/opt/dolphinscheduler/tools/bin/upgrade-schema.sh

验证
返回结果均为[ RUNNING ],表示正常

/opt/dolphinscheduler/bin/status-all.sh

均能够正常显示仪表UI,表示正常
http://主节点IP:12345/dolphinscheduler/ui/monitor/master
http://主节点IP:12345/dolphinscheduler/ui/monitor/worker
用户名:admin 密码:dolphinscheduler123

删除dolphinscheduler安装包目录(可选)
rm -rf /opt/apache-dolphinscheduler-dev-SNAPSHOT-bin

十七、 Prometheus

检查zip文件是否损坏

unzip -t /opt/soft/prometheus.zip

解压

unzip -d /opt/ /opt/soft/prometheus.zip

设置权限

chmod u+x /opt/prometheus/alertmanager.service /opt/prometheus/grafana-server.service /opt/prometheus/node_exporter.service /opt/prometheus/prometheus.service

发送配置到每个节点

scp -r -p /opt/prometheus/ root@slave1-ubuntu:/opt/
scp -r -p /opt/prometheus/ root@slave2-ubuntu:/opt/

创建prometheus用户,密码设为prometheus;除密码外,其他项直接回车使用默认值即可

sudo adduser prometheus

Adding user `prometheus' ...
Adding new group `prometheus' (1001) ...
Adding new user `prometheus' (1001) with group `prometheus' ...
Creating home directory `/home/prometheus' ...
Copying files from `/etc/skel' ...
New password: 
Retype new password: 
passwd: password updated successfully
Changing the user information for prometheus
Enter the new value, or press ENTER for the default
        Full Name []: 
        Room Number []: 
        Work Phone []: 
        Home Phone []: 
        Other []: 

Is the information correct? [Y/n] y

查看添加的用户,是否成功添加prometheus用户

cat /etc/passwd

将/opt/prometheus目录及其内部所有子目录和文件的所有者和所属组都更改为prometheus

chown prometheus:prometheus -R /opt/prometheus

设置服务开机自启

主节点:
alertmanager.service

ln -s /opt/prometheus/alertmanager.service /etc/systemd/system/
systemctl enable alertmanager.service
Created symlink /etc/systemd/system/multi-user.target.wants/alertmanager.service → /opt/prometheus/alertmanager.service.

grafana-server.service

ln -s /opt/prometheus/grafana-server.service /etc/systemd/system/
systemctl enable grafana-server.service
Created symlink /etc/systemd/system/multi-user.target.wants/grafana-server.service → /opt/prometheus/grafana-server.service.

node_exporter.service

ln -s /opt/prometheus/node_exporter.service /etc/systemd/system/
systemctl enable node_exporter.service
Created symlink /etc/systemd/system/multi-user.target.wants/node_exporter.service → /opt/prometheus/node_exporter.service.

prometheus.service

ln -s /opt/prometheus/prometheus.service /etc/systemd/system/
systemctl enable prometheus.service
Created symlink /etc/systemd/system/multi-user.target.wants/prometheus.service → /opt/prometheus/prometheus.service.

更改主节点prometheus配置文件,在job_name后添加-机器名,targets修改为机器名:9100。
注意缩进,要与修改前的配置保持一致!!!

vi /opt/prometheus/prometheus/prometheus.yml
  - job_name: 'node-exporter-master-ubuntu'
    scrape_interval: 15s
    static_configs:
    - targets: ['master-ubuntu:9100']
      labels:
        instance: Prometheus服务器

  - job_name: 'node-exporter-slave1-ubuntu'
    scrape_interval: 15s
    static_configs:
    - targets: ['slave1-ubuntu:9100']
      labels:
        instance: Prometheus服务器

  - job_name: 'node-exporter-slave2-ubuntu'
    scrape_interval: 15s
    static_configs:
    - targets: ['slave2-ubuntu:9100']
      labels:
        instance: Prometheus服务器
修改完成后,重新加载systemd配置:

systemctl daemon-reload
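
修改后可以先用promtool校验配置文件语法(假设promtool与prometheus二进制位于同一目录/opt/prometheus/prometheus/下):

/opt/prometheus/prometheus/promtool check config /opt/prometheus/prometheus/prometheus.yml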

第一从节点、第二从节点分别执行:

node_exporter.service

ln -s /opt/prometheus/node_exporter.service /etc/systemd/system/
systemctl enable node_exporter.service

# 返回结果:
Created symlink /etc/systemd/system/multi-user.target.wants/node_exporter.service → /opt/prometheus/node_exporter.service.

重新加载systemd配置:

systemctl daemon-reload

启动
主节点

systemctl start alertmanager.service
systemctl start grafana-server.service
systemctl start node_exporter.service
systemctl start prometheus.service

查看状态,显示Active: active (running)为正常:

systemctl status alertmanager.service
systemctl status grafana-server.service
systemctl status node_exporter.service
systemctl status prometheus.service

第一从节点、第二从节点

systemctl start node_exporter.service

查看状态,显示Active: active (running)为正常:

systemctl status node_exporter.service

验证
http://主节点IP:9090/,出现ui界面则启动成功
http://主节点IP:9093/,出现ui界面则启动成功
http://主节点IP:3000/,出现ui界面则启动成功
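
也可以在主节点命令行直接探测各服务的健康接口(端口以上文为准):

# Prometheus
curl -s http://localhost:9090/-/healthy
# Alertmanager
curl -s http://localhost:9093/-/healthy
# Grafana
curl -s http://localhost:3000/api/health
# Node Exporter指标页
curl -s http://localhost:9100/metrics | head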

访问Grafana并更改登录密码为admin,首次登录需要更改密码(需要连接广域网)
http://主节点IP:3000/
用户名:admin 密码:admin

左下角齿轮图标 -> Configuration -> Add data source -> Prometheus -> URL一栏填充默认:http://localhost:9090,下翻到底部点击 Save & test

左侧Dashboards图标 -> + Import -> Import via grafana.com一栏填充 1860 -> 点击右侧Load -> 点击Select a Prometheus data source,选择Prometheus (default) -> 点击下方Import

点击顶部Job右侧的下拉列表,可以看到四个选项,node-exporter,node-exporter-master-ubuntu,node-exporter-slave1-ubuntu,node-exporter-slave2-ubuntu,分别查看三台机器是否正常显示监测数据
