Offline Data Warehouse Setup, Part 2: Component Installation

Component Installation


Pre-installation preparation

1. Create a temp user

[root@bogon /]# adduser temp
[root@bogon /]# passwd temp
Changing password for user temp.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
[root@bogon /]# 

2. Change ownership of /opt/module and /opt/software to the temp user

[root@bogon /]# chown -R temp:temp /opt/module/ /opt/software/
[root@bogon /]# cd /opt/
[root@bogon opt]# ll
total 0
drwxr-xr-x. 4 temp temp 47 Mar 31 13:20 module
drwxr-xr-x. 3 temp temp 27 Mar 31 13:00 software
[root@bogon opt]# 

3. Give the temp user root privileges

[root@bogon opt]# vi /etc/sudoers
#add line 93 as follows
     91 ## Allow root to run any commands anywhere
     92 root    ALL=(ALL)       ALL
     93 temp    ALL=(ALL)       ALL

4. Add hostname mappings to /etc/hosts

[root@bogon opt]# vi /etc/hosts
#add the following
192.168.170.102 hadoop102
192.168.170.103 hadoop103
192.168.170.104 hadoop104
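As a quick sanity check after editing /etc/hosts, you can confirm that a mapping resolves to the expected IP. A minimal sketch (it uses a temporary file standing in for /etc/hosts, so it is safe to run anywhere):

```shell
# Sketch: verify a hosts-style mapping (temp file stands in for /etc/hosts)
h=$(mktemp)
printf '192.168.170.102 hadoop102\n192.168.170.103 hadoop103\n' > "$h"
# Look up the IP recorded for hadoop102
ip=$(awk '$2 == "hadoop102" {print $1}' "$h")
echo "$ip"
```

On the real machines, `ping hadoop102` is the simplest end-to-end check.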

5. Add the same mappings to the Windows hosts file under C:\Windows\System32\drivers\etc

192.168.170.102 hadoop102
192.168.170.103 hadoop103
192.168.170.104 hadoop104

6. Install the following tools

sudo yum install -y epel-release 
sudo yum install -y psmisc nc net-tools rsync vim lrzsz ntp libzstd openssl-static tree iotop git

7. Change the hostname and reboot

hostnamectl --static set-hostname hadoop102

Disable the firewall

sudo systemctl stop firewalld
sudo systemctl disable firewalld

8. Clone two more machines

Update the network configuration for hadoop103 and hadoop104.

Set their hostnames to hadoop103 and hadoop104.

9. Write a cluster distribution script

Create an xsync file under /home/temp/bin so it can be invoked from anywhere.

#!/bin/bash
#1. Check the argument count
if [ $# -lt 1 ]
then
  echo "Not enough arguments!"
  exit 1
fi
#2. Loop over every machine in the cluster
for host in hadoop102 hadoop103 hadoop104
do
  echo ====================  $host  ====================
  #3. Send each file or directory in turn
  for file in "$@"
  do
    #4. Check that it exists
    if [ -e "$file" ]
    then
      #5. Get the absolute parent directory
      pdir=$(cd -P "$(dirname "$file")"; pwd)
      #6. Get the file name
      fname=$(basename "$file")
      ssh "$host" "mkdir -p $pdir"
      rsync -av "$pdir/$fname" "$host:$pdir"
    else
      echo "$file does not exist!"
    fi
  done
done
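The pdir/fname step above normalizes whatever path is passed in (relative, containing `..`, etc.) into an absolute parent directory plus a file name, so rsync recreates the same layout on the remote host. That logic can be exercised in isolation:

```shell
# Sketch: the path-normalization step from xsync, in isolation
mkdir -p /tmp/xsync-demo
touch /tmp/xsync-demo/test.txt
file="/tmp/xsync-demo/../xsync-demo/test.txt"    # a messy path on purpose
pdir=$(cd -P "$(dirname "$file")"; pwd)          # cd -P resolves the ".." physically
fname=$(basename "$file")
echo "$pdir/$fname"                              # clean absolute path
```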

10. Configure passwordless SSH

[temp@hadoop102 ~]$ pwd
/home/temp
[temp@hadoop102 ~]$ ls -alh
total 12K
drwx------. 2 temp temp  62 Mar 31 17:48 .
drwxr-xr-x. 3 root root  18 Mar 31 17:27 ..
-rw-r--r--. 1 temp temp  18 Aug  3 2017 .bash_logout
-rw-r--r--. 1 temp temp 193 Aug  3 2017 .bash_profile
-rw-r--r--. 1 temp temp 231 Aug  3 2017 .bashrc
[temp@hadoop102 ~]$ ssh hadoop102
#type yes and press Enter
[temp@hadoop102 ~]$ ls -alh
total 12K
drwx------. 3 temp temp  74 Mar 31 17:49 .
drwxr-xr-x. 3 root root  18 Mar 31 17:27 ..
-rw-r--r--. 1 temp temp  18 Aug  3 2017 .bash_logout
-rw-r--r--. 1 temp temp 193 Aug  3 2017 .bash_profile
-rw-r--r--. 1 temp temp 231 Aug  3 2017 .bashrc
drwx------. 2 temp temp  25 Mar 31 17:49 .ssh
[temp@hadoop102 ~]$ cd .ssh/
[temp@hadoop102 .ssh]$ ll
total 4
-rw-r--r--. 1 temp temp 187 Mar 31 17:49 known_hosts
[temp@hadoop102 .ssh]$ ssh-keygen -t rsa
#press Enter three times
[temp@hadoop102 .ssh]$ ll
total 12
-rw-------. 1 temp temp 1679 Mar 31 17:50 id_rsa
-rw-r--r--. 1 temp temp  396 Mar 31 17:50 id_rsa.pub
-rw-r--r--. 1 temp temp  187 Mar 31 17:49 known_hosts
[temp@hadoop102 .ssh]$ ssh-copy-id hadoop102
[temp@hadoop102 .ssh]$ ls -alh
total 16K
drwx------. 2 temp temp   80 Mar 31 17:50 .
drwx------. 3 temp temp   74 Mar 31 17:49 ..
-rw-------. 1 temp temp  396 Mar 31 17:50 authorized_keys
-rw-------. 1 temp temp 1.7K Mar 31 17:50 id_rsa
-rw-r--r--. 1 temp temp  396 Mar 31 17:50 id_rsa.pub
-rw-r--r--. 1 temp temp  187 Mar 31 17:49 known_hosts
[temp@hadoop102 .ssh]$ 

Copy the key to hadoop103 and hadoop104 as well (ssh-copy-id hadoop103, ssh-copy-id hadoop104).

Cluster versions

Product      Version
Hadoop       3.1.3
Flume        1.9.0
Kafka        2.11-2.4.1
Hive         3.1.2
Sqoop        1.4.6
MySQL        5.7.x
Azkaban      2.5.0
Java         1.8
Zookeeper    3.5.7

ZooKeeper setup

1. Extract

[temp@hadoop102 software]$ tar -zxvf apache-zookeeper-3.5.7-bin.tar.gz -C /opt/module/
[temp@hadoop102 module]$ mv apache-zookeeper-3.5.7-bin zookeeper

2. Create the zkData directory

[temp@hadoop102 module]$ cd zookeeper/
[temp@hadoop102 zookeeper]$ pwd
/opt/module/zookeeper
[temp@hadoop102 zookeeper]$ mkdir zkData

3. Configure zoo.cfg

[temp@hadoop102 zookeeper]$ cd conf/
[temp@hadoop102 conf]$ ll
total 12
-rw-r--r--. 1 temp temp  535 May  4 2018 configuration.xsl
-rw-r--r--. 1 temp temp 2712 Feb  7 2020 log4j.properties
-rw-r--r--. 1 temp temp  922 Feb  7 2020 zoo_sample.cfg
[temp@hadoop102 conf]$ mv zoo_sample.cfg zoo.cfg 
[temp@hadoop102 conf]$ vi zoo.cfg 
#change the data storage directory
dataDir=/opt/module/zookeeper/zkData/
#add the cluster entries
server.2=hadoop102:2888:3888
server.3=hadoop103:2888:3888
server.4=hadoop104:2888:3888

4. Create myid under zkData

[temp@hadoop102 zookeeper]$ vim zkData/myid
#the number to add is the x from the server.x entries in zoo.cfg
2

5. Distribute to hadoop103 and hadoop104

Then change the number in zkData/myid on each host (3 and 4 respectively).

[temp@hadoop102 module]$ xsync /opt/module/zookeeper/
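Instead of editing each myid by hand after distributing, the number can be derived from the server.N lines in zoo.cfg. A sketch with a hypothetical helper (myid_for is not part of ZooKeeper; the config lines mirror the ones added above):

```shell
# Sketch: derive a host's myid from its server.N=host:2888:3888 line
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
server.2=hadoop102:2888:3888
server.3=hadoop103:2888:3888
server.4=hadoop104:2888:3888
EOF
myid_for() {
  # split on "." and "=", then print N for the line whose hostname matches $1
  awk -F'[.=]' -v h="$1" '$0 ~ "="h":" {print $2}' "$cfg"
}
myid_for hadoop103   # prints 3
```

On each host this value would be written into /opt/module/zookeeper/zkData/myid.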

6. Start the cluster

[temp@hadoop102 module]$ /opt/module/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[temp@hadoop103 module]$ /opt/module/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[temp@hadoop104 module]$ /opt/module/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

7. Write a start/stop script

[temp@hadoop102 bin]$ vim /home/temp/bin/zk.sh 
#!/bin/bash
#ZooKeeper start/stop/status script
case $1 in 
"start"){
	for i in hadoop102 hadoop103 hadoop104
	do
		ssh $i "/opt/module/zookeeper/bin/zkServer.sh start"
	done
};;
"stop"){
	for i in hadoop102 hadoop103 hadoop104
	do
		ssh $i "/opt/module/zookeeper/bin/zkServer.sh stop"
	done
};;
"status"){
	for i in hadoop102 hadoop103 hadoop104
	do
		ssh $i "/opt/module/zookeeper/bin/zkServer.sh status"
	done
};;
esac  

8. Write a script to run a command across the cluster

[temp@hadoop102 bin]$ vim /home/temp/bin/xcall.sh

#!/bin/bash

for i in hadoop102 hadoop103 hadoop104
do
        echo "================ $i ==================="
        ssh $i "$*"
done
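Note that xcall.sh uses "$*" rather than "$@" on purpose: it joins all arguments into the single string that ssh expects as the remote command. A local demonstration of the difference (demo_star and demo_count are illustrative names, not part of the script):

```shell
# "$*" joins all arguments into one word; "$@" keeps them separate
demo_star()  { printf '%s\n' "$*"; }   # prints one line regardless of argument count
demo_count() { echo $#; }
demo_star jps -l          # prints: jps -l
demo_count "jps -l"       # a quoted string is still a single argument: prints 1
```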

Hadoop setup

https://hadoop.apache.org/docs/r3.1.3/hadoop-project-dist/hadoop-common/ClusterSetup.html

1. Extract

[temp@hadoop102 hadoop-3.1.3]$ tar -zxvf /opt/software/hadoop-3.1.3.tar.gz -C /opt/module/

2. Configure environment variables

[temp@hadoop102 hadoop-3.1.3]$ sudo vim /etc/profile.d/my_env.sh 
#add the Hadoop environment variables
#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

[temp@hadoop102 hadoop-3.1.3]$ source /etc/profile.d/my_env.sh 
[temp@hadoop102 hadoop-3.1.3]$ echo $HADOOP_HOME
/opt/module/hadoop-3.1.3
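A quick way to verify that the PATH additions took effect is to search PATH for the Hadoop bin directory. A self-contained sketch (it sets the variables itself rather than relying on my_env.sh, so it can run anywhere):

```shell
# Sketch: confirm $HADOOP_HOME/bin ended up on PATH
HADOOP_HOME=/opt/module/hadoop-3.1.3
PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Wrap PATH in colons so the match works at either end of the list
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop bin on PATH" ;;
  *)                      echo "hadoop bin MISSING" ;;
esac
```

On the real host, `which hadoop` is the simplest equivalent check.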

3. Modify five files

core-site.xml

mapred-site.xml

hdfs-site.xml

yarn-site.xml

workers

vim $HADOOP_HOME/etc/hadoop/core-site.xml
#add the following
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop102:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-3.1.3/data</value>
    </property>
</configuration>



vim $HADOOP_HOME/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop104:9868</value>
    </property>
    <!-- HDFS replication factor -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>



vim $HADOOP_HOME/etc/hadoop/yarn-site.xml
#add the following

<configuration>
	<property>
	    <name>yarn.nodemanager.aux-services</name>
	    <value>mapreduce_shuffle</value>
	</property>	
	<property>
	    <name>yarn.resourcemanager.hostname</name>
	    <value>hadoop103</value>
	</property>
	<property>
	    <name>yarn.nodemanager.env-whitelist</name>
	       <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
	</property>

	<!-- Memory settings -->
	<property>
	    <name>yarn.scheduler.minimum-allocation-mb</name>
	    <value>1024</value>
	</property>
    <property>
	    <name>yarn.scheduler.maximum-allocation-mb</name>
	    <value>5120</value>
	</property>
	<property>
	    <name>yarn.nodemanager.resource.memory-mb</name>
	    <value>5120</value>
	</property>
	<!-- Skip physical/virtual memory checks -->
	<property>
	    <name>yarn.nodemanager.pmem-check-enabled</name>
	    <value>false</value>
	</property>
	<property>
	    <name>yarn.nodemanager.vmem-check-enabled</name>
	    <value>false</value>
	</property>

	<!-- Log aggregation -->
	<property>
	    <name>yarn.log-aggregation-enable</name>
	    <value>true</value>
	</property>
	<property>  
	    <name>yarn.log.server.url</name>  
	    <value>http://${yarn.timeline-service.webapp.address}/applicationhistory/logs</value>
	</property>
	<property>
	    <name>yarn.log-aggregation.retain-seconds</name>
	    <value>604800</value>
	</property>
	<property>
	    <name>yarn.timeline-service.enabled</name>
	    <value>true</value>
	</property>
	<property>
	    <name>yarn.timeline-service.hostname</name>
	    <value>${yarn.resourcemanager.hostname}</value>
	</property>
	<property>
	    <name>yarn.timeline-service.http-cross-origin.enabled</name>
	    <value>true</value>
	</property>
	<property>
	    <name>yarn.resourcemanager.system-metrics-publisher.enabled</name>
	    <value>true</value>
	</property>

</configuration>
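The memory settings above determine how many containers one NodeManager can host: with 5120 MB per node and a 1024 MB minimum allocation, at most five minimum-size containers fit, and a single container can at most take the whole node. A small sketch of that arithmetic:

```shell
# Container capacity implied by the yarn-site.xml values above
min_alloc_mb=1024      # yarn.scheduler.minimum-allocation-mb
max_alloc_mb=5120      # yarn.scheduler.maximum-allocation-mb
node_mb=5120           # yarn.nodemanager.resource.memory-mb
echo "max minimum-size containers per node: $(( node_mb / min_alloc_mb ))"
echo "largest single container: ${max_alloc_mb} MB (the whole node)"
```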





vim $HADOOP_HOME/etc/hadoop/mapred-site.xml
#add the following
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <!-- JobHistory server address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop102:10020</value>
    </property>
    <!-- JobHistory web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop102:19888</value>
    </property>
</configuration>



vim $HADOOP_HOME/etc/hadoop/workers
#add the following; the workers file must not contain blank lines or trailing spaces
hadoop102
hadoop103
hadoop104
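Since a stray blank line or trailing space in workers silently breaks worker startup, a quick check is worthwhile. A sketch validating a temp file (in practice you would point it at $HADOOP_HOME/etc/hadoop/workers):

```shell
# Sketch: flag blank lines or trailing whitespace in a workers-style file
f=$(mktemp)
printf 'hadoop102\nhadoop103\nhadoop104\n' > "$f"
if grep -qE '[[:space:]]$|^$' "$f"; then
  echo "workers file has blank lines or trailing whitespace"
else
  echo "workers file looks clean"
fi
```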

4. Distribute hadoop-3.1.3 and my_env.sh to the other nodes

[temp@hadoop102 module]$ scp /etc/profile.d/my_env.sh root@hadoop103:/etc/profile.d/
[temp@hadoop102 module]$ scp /etc/profile.d/my_env.sh root@hadoop104:/etc/profile.d/
[temp@hadoop102 module]$ xsync /opt/module/hadoop-3.1.3/
[temp@hadoop102 module]$ xcall.sh source /etc/profile.d/my_env.sh

5. Format the NameNode

[temp@hadoop102 hadoop-3.1.3]$ hdfs namenode -format

6. Start the cluster

Start HDFS

start-dfs.sh

Start YARN on the node where the ResourceManager is configured (hadoop103)

start-yarn.sh

HDFS web UI: http://hadoop102:9870/

SecondaryNameNode web UI: http://hadoop104:9868/status.html

Start the JobHistory server on hadoop102

mapred --daemon start historyserver

JobHistory UI: http://hadoop102:19888/jobhistory

Enable log aggregation

Note: enabling log aggregation requires restarting the NodeManager, ResourceManager, and JobHistory server.

View the aggregated logs at http://hadoop102:19888/jobhistory


HDFS HA

hadoop102            hadoop103            hadoop104
NameNode             NameNode             NameNode
ZKFC                 ZKFC                 ZKFC
JournalNode          JournalNode          JournalNode
DataNode             DataNode             DataNode
ZK                   ZK                   ZK
                     ResourceManager
NodeManager          NodeManager          NodeManager

QJM mode

https://hadoop.apache.org/docs/r3.1.3/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html

Delete /opt/module/hadoop-3.1.3/data/ and /opt/module/hadoop-3.1.3/logs/

Modify core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>hadoop.data.dir</name>
    <value>/opt/module/hadoop-3.1.3/data</value>
  </property>

  <!-- Automatic failover -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop102:2181,hadoop103:2181,hadoop104:2181</value>
  </property>
</configuration>

Modify hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file://${hadoop.data.dir}/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file://${hadoop.data.dir}/data</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2,nn3</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>hadoop102:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>hadoop103:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn3</name>
    <value>hadoop104:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>hadoop102:9870</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>hadoop103:9870</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn3</name>
    <value>hadoop104:9870</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop102:8485;hadoop103:8485;hadoop104:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/temp/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>${hadoop.data.dir}/jn</value>
  </property>
  <!-- Automatic failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>

Sync to hadoop103 and hadoop104

Start the JournalNode service on all three machines

hdfs --daemon start journalnode

Format and start the NameNode on nn1 (hadoop102)

hdfs namenode -format
hdfs --daemon start namenode

On nn2 and nn3, sync the metadata from nn1

hdfs namenode -bootstrapStandby

=====================================================
About to bootstrap Standby ID nn2 from:
           Nameservice ID: mycluster
        Other Namenode ID: nn1
  Other NN's HTTP address: http://hadoop102:9870
  Other NN's IPC  address: hadoop102/192.168.170.102:8020
             Namespace ID: 1954271648
            Block pool ID: BP-1123891949-192.168.170.102-1617200167349
               Cluster ID: CID-4ce53d63-d09c-44cd-a93b-0b6af9afa548
           Layout version: -64
       isUpgradeFinalized: true
=====================================================

Start the NameNodes on nn2 and nn3

hdfs --daemon start namenode

Check the NameNode web UIs

http://hadoop102:9870

http://hadoop103:9870

http://hadoop104:9870

All three are in standby state.

Switch nn1 to active

hdfs haadmin -transitionToActive nn1

#check the state of nn1
hdfs haadmin -getServiceState nn1

Configure HDFS HA automatic failover

Add the following to hdfs-site.xml

<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>

Add the following to core-site.xml

<property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop102:2181,hadoop103:2181,hadoop104:2181</value>
</property>

Stop all HDFS services

stop-dfs.sh

Initialize the HA state in ZooKeeper

hdfs zkfc -formatZK

Start HDFS

start-dfs.sh

YARN HA

http://hadoop.apache.org/docs/r3.1.3/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html

hadoop102            hadoop103            hadoop104
NameNode             NameNode             NameNode
ZKFC                 ZKFC                 ZKFC
JournalNode          JournalNode          JournalNode
DataNode             DataNode             DataNode
ZK                   ZK                   ZK
ResourceManager      ResourceManager
NodeManager          NodeManager          NodeManager

Modify yarn-site.xml

Comment out the original yarn.resourcemanager.hostname:

<!--    
        <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>hadoop103</value>
        </property>
-->

Add the following after it:

<!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
 
    <!-- Declare the addresses of the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>cluster-yarn1</value>
    </property>

    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop102</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop103</value>
    </property>
 
    <!-- ZooKeeper cluster address -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop102:2181,hadoop103:2181,hadoop104:2181</value>
    </property>

    <!-- Enable automatic recovery -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
 
    <!-- Store ResourceManager state in the ZooKeeper cluster -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>

Sync to the other nodes

Start the ZooKeeper cluster

Start the NameNodes

Start YARN on hadoop102

start-yarn.sh

Check the service states

yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2

At this point the Hadoop setup is complete.

[temp@hadoop102 hadoop]$ xcall.sh jps
================ hadoop102 ===================
7700 QuorumPeerMain
8020 NameNode
8408 JournalNode
9082 ResourceManager
9226 NodeManager
8156 DataNode
8637 DFSZKFailoverController
9887 Jps
================ hadoop103 ===================
4625 QuorumPeerMain
5346 ResourceManager
4826 NameNode
6219 Jps
4925 DataNode
5039 JournalNode
5439 NodeManager
================ hadoop104 ===================
5441 JournalNode
6212 Jps
5767 NodeManager
5034 QuorumPeerMain
5228 NameNode
5327 DataNode
[temp@hadoop102 hadoop]$ 

Flume


http://flume.apache.org/

http://flume.apache.org/releases/content/1.9.0/FlumeUserGuide.html

http://flume.apache.org/releases/content/1.9.0/FlumeDeveloperGuide.html

1. Extract

[temp@hadoop102 software]$ tar -zxvf apache-flume-1.9.0-bin.tar.gz -C /opt/module/
[temp@hadoop102 module]$ mv apache-flume-1.9.0-bin/ flume

2. Set the JDK that Flume runs on

[temp@hadoop102 module]$ cd flume/conf/
[temp@hadoop102 conf]$ mv flume-env.sh.template flume-env.sh
[temp@hadoop102 conf]$ vim flume-env.sh 
#set JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_212/

3. Distribute

[temp@hadoop102 module]$ xsync flume/

4. Ganglia monitoring

Modify /etc/httpd/conf.d/ganglia.conf

[temp@hadoop102 flume]$ sudo vim /etc/httpd/conf.d/ganglia.conf
#change as follows
#
# Ganglia monitoring system php web frontend
#

Alias /ganglia /usr/share/ganglia

<Location /ganglia>
  #Order deny,allow
  #Deny from all
  #Allow from all
  #Allow from 127.0.0.1
  #Allow from ::1
  # Allow from .example.com
  Require all granted
</Location>        

Modify /etc/ganglia/gmetad.conf

[temp@hadoop102 flume]$ sudo vim /etc/ganglia/gmetad.conf
#change as follows
data_source "my_cluster" 192.168.170.102

Modify /etc/ganglia/gmond.conf

[temp@hadoop102 flume]$ sudo vim /etc/ganglia/gmond.conf
#change as follows
cluster {
  name = "my_cluster"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}

/* The host section describes attributes of the host, like the location */
host {
  location = "unspecified"
}

/* Feel free to specify as many udp_send_channels as you like.  Gmond
   used to only support having a single channel */
udp_send_channel {
  #bind_hostname = yes # Highly recommended, soon to be default.
                       # This option tells gmond to use a source address
                       # that resolves to the machine's hostname.  Without
                       # this, the metrics may appear to come from any
                       # interface and the DNS names associated with
                       # those IPs will be used to create the RRDs.
  # mcast_join = 239.2.11.71
  host = 192.168.170.102
  port = 8649
  ttl = 1
}

/* You can specify as many udp_recv_channels as you like as well. */
udp_recv_channel {
  #mcast_join = 239.2.11.71
  port = 8649
  #bind = 239.2.11.71
  bind = 192.168.170.102
  retry_bind = true
  # Size of the UDP buffer. If you are handling lots of metrics you really
  # should bump it up to e.g. 10MB or even higher.
  # buffer = 10485760
}

Modify /etc/selinux/config

[temp@hadoop102 flume]$ sudo vim /etc/selinux/config

#change as follows
SELINUX=disabled

Note: disabling SELinux only takes effect after a reboot; if you do not want to reboot now, disable it temporarily:

sudo setenforce 0
sudo chmod -R 777 /var/lib/ganglia

Start (or stop) Ganglia

sudo service httpd start
sudo service gmetad start
sudo service gmond start

sudo service httpd stop
sudo service gmetad stop
sudo service gmond stop

http://192.168.170.102/ganglia/


Kafka


http://kafka.apache.org/documentation/#configuration

1. Extract

[temp@hadoop102 software]$ tar -zxvf kafka_2.11-2.4.1.tgz -C /opt/module/
[temp@hadoop102 module]$ mv kafka_2.11-2.4.1/ kafka

2. Modify server.properties

[temp@hadoop102 module]$ cd kafka/config
[temp@hadoop102 config]$ vim server.properties
#change the following

#globally unique broker id; must not be duplicated
broker.id=0
#enable topic deletion
delete.topic.enable=true
#directory where Kafka stores its log segments (the message data)
log.dirs=/opt/module/kafka/logs
#ZooKeeper connection string; Kafka's znodes live under the /kafka chroot
zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181/kafka

3. Distribute

[temp@hadoop102 kafka]$ xsync /opt/module/kafka/

Then change broker.id in server.properties on hadoop103 and hadoop104 (e.g. to 1 and 2).
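broker.id must differ on every host. One convention is to derive it from the numeric suffix of the hostname (hadoop102 → 0, hadoop103 → 1, hadoop104 → 2); broker_id_for below is a hypothetical helper illustrating that convention, not part of Kafka:

```shell
# Sketch: derive broker.id from the hadoop10X hostname suffix (assumed convention)
broker_id_for() {
  local n=${1#hadoop1}        # "hadoop103" -> "03"
  echo $(( 10#$n - 2 ))       # force base 10 so leading zeros do not break arithmetic
}
broker_id_for hadoop102   # prints 0
broker_id_for hadoop104   # prints 2
```

Combined with `hostname`, this could set broker.id automatically after distributing the Kafka directory.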

4. Start

Make sure the ZooKeeper cluster is already running:

[temp@hadoop102 kafka]$ xcall.sh jps
================ hadoop102 ===================
7700 QuorumPeerMain
11545 Jps
================ hadoop103 ===================
7392 Jps
4625 QuorumPeerMain
================ hadoop104 ===================
5034 QuorumPeerMain
7327 Jps
[temp@hadoop102 kafka]$ 


[temp@hadoop102 kafka]$ xcall.sh /opt/module/kafka/bin/kafka-server-start.sh -daemon /opt/module/kafka/config/server.properties 
================ hadoop102 ===================
================ hadoop103 ===================
================ hadoop104 ===================
[temp@hadoop102 kafka]$ xcall.sh jps
================ hadoop102 ===================
12483 Jps
7700 QuorumPeerMain
12388 Kafka
================ hadoop103 ===================
4625 QuorumPeerMain
8213 Kafka
8309 Jps
================ hadoop104 ===================
8228 Jps
5034 QuorumPeerMain
8141 Kafka
[temp@hadoop102 kafka]$

5. Cluster start/stop script

#!/bin/bash

case $1 in
"start"){
    for i in hadoop102 hadoop103 hadoop104
    do
        echo "=============== $i kafka start ================="
        ssh $i "/opt/module/kafka/bin/kafka-server-start.sh -daemon /opt/module/kafka/config/server.properties"
    done

};;
"stop"){
    for i in hadoop102 hadoop103 hadoop104
    do
        echo "=============== $i kafka stop ================="
        ssh $i "/opt/module/kafka/bin/kafka-server-stop.sh"
    done
};;
esac

6. Kafka monitoring (Kafka Eagle)

http://download.kafka-eagle.org/

https://docs.kafka-eagle.org/2.env-and-install/2.installing

Monitoring requires MySQL; make sure MySQL is installed.

Extract kafka-eagle-bin-1.3.7.tar.gz

[temp@hadoop102 software]$ tar -zxvf kafka-eagle-bin-1.3.7.tar.gz 

Enter kafka-eagle-bin-1.3.7 and extract kafka-eagle-web-1.3.7-bin.tar.gz into /opt/module/

[temp@hadoop102 software]$ cd kafka-eagle-bin-1.3.7
[temp@hadoop102 kafka-eagle-bin-1.3.7]$ tar -zxvf kafka-eagle-web-1.3.7-bin.tar.gz -C /opt/module/

Configure the environment variables, save, and reload

[temp@hadoop102 kafka-eagle-web-1.3.7]$ sudo vim /etc/profile.d/my_env.sh
#KE_HOME
export KE_HOME=/opt/module/kafka-eagle-web-1.3.7
export PATH=$PATH:$KE_HOME/bin

Make the startup scripts executable

[temp@hadoop102 kafka-eagle-web-1.3.7]$ pwd
/opt/module/kafka-eagle-web-1.3.7
[temp@hadoop102 kafka-eagle-web-1.3.7]$ chmod -R 777 bin/

Modify system-config.properties

[temp@hadoop102 conf]$ vim /opt/module/kafka-eagle-web-1.3.7/conf/system-config.properties
#change the following
######################################
# multi zookeeper&kafka cluster list
######################################
kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=hadoop102:2181,hadoop103:2181,hadoop104:2181

######################################
# kafka offset storage
######################################
cluster1.kafka.eagle.offset.storage=kafka

######################################
# enable kafka metrics
######################################
#Kafka does not open a JMX port by default, so Kafka Eagle's trend charts are
#disabled by default (kafka.eagle.metrics.charts=false); change it to true
kafka.eagle.metrics.charts=true

######################################
# kafka jdbc driver address
######################################
kafka.eagle.driver=com.mysql.jdbc.Driver
kafka.eagle.url=jdbc:mysql://hadoop102:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&useSSL=false
kafka.eagle.username=root
kafka.eagle.password=root

Modify kafka-server-start.sh

[temp@hadoop102 conf]$ vim /opt/module/kafka/bin/kafka-server-start.sh 
#change this section
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-server -Xms2G -Xmx2G -XX:PermSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70"
    # the port does not have to be 9999; any free port will do
    export JMX_PORT="9999"
#    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi

Start ZooKeeper

Start Kafka

Start Kafka Eagle

[temp@hadoop102 kafka-eagle-web-1.3.7]$ pwd
/opt/module/kafka-eagle-web-1.3.7
[temp@hadoop102 kafka-eagle-web-1.3.7]$ bin/ke.sh start
*******************************************************************
* Kafka Eagle Service has started success.
* Welcome, Now you can visit 'http://192.168.170.102:8048/ke'
* Account:admin ,Password:123456
*******************************************************************
* <Usage> ke.sh [start|status|stop|restart|stats] </Usage>
* <Usage> https://www.kafka-eagle.org/ </Usage>