Shell automation: automatically modifying cluster environment configuration files

Notes from automating the configuration files during a big-data cluster deployment.

I. Requirements

The big-data project environment is built on zookeeper, kafka, elasticsearch, flink, minio, doris, and nifi clusters; the cluster must be set up according to a given configuration file.

II. Tools

shell (fairly universal: any server can run it without adaptation; python would of course be the simplest alternative)

III. Design

Each service is installed on its designated nodes. The shell script reads the current hostname and loops over the host nodes declared in the configuration file; when the current node is found, the node names in that service's config are replaced according to its entry. Some services also need the addresses of other clusters: for example kafka's zookeeper.connect=node01:2181,node02:2181,node03:2181 is built by looping over the zookeeper nodes and appending the port to each. Finally, the prometheus.yml monitoring configuration is regenerated in full for the node. Since it is a YAML file, spaces and tabs matter absolutely, so the file must be written strictly to spec; it is written line by line with the echo command.

IV. Technical notes:
1. Getting the hostname in shell: assign the current host's name directly to a variable
#backticks execute the enclosed shell command
hostname=`hostname`
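The backtick form can equivalently be written with $( ), which behaves the same and nests more cleanly; a quick sketch (using uname -n, which reports the same node name, so the example does not depend on a separate hostname binary):

```shell
# Capture a command's stdout into a variable; $(...) is equivalent to
# the backtick form and is easier to nest.
host_bt=`uname -n`
host_ps=$(uname -n)
echo "$host_ps"
```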
2. Reading the config file into an array
#cat the config file, grep the line containing doris.be, split it on = with awk and print the second field; the surrounding () splits the result on whitespace into an array
dorisBeHosts=(`cat $configFile|grep "doris.be"|awk -F = '{print $2}'`)
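As a self-contained sketch (the temporary file and its keys are examples, not the real config.ini), the same parsing can be exercised offline; anchoring the pattern with ^key= keeps comment lines and longer keys that merely contain the same substring out of the match:

```shell
# Parse a space-separated value list from a key=value config into an array.
configFile=$(mktemp)
cat > "$configFile" <<'EOF'
#doris be nodes
doris.be=node01 node02 node03
doris.network.be=ens3 ens3 ens3
EOF

# ^doris.be= anchors the match so the comment and doris.network.be are
# skipped; the outer ( ) splits the awk output on whitespace into an array.
dorisBeHosts=($(grep "^doris.be=" "$configFile" | awk -F = '{print $2}'))
echo "${dorisBeHosts[@]}"    # node01 node02 node03
rm -f "$configFile"
```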
3. Functions in shell scripts
#the initialization code at the top runs first, then main, then the functions main calls (those functions must already be defined when main is invoked). A call with no arguments is just the function name; arguments follow it separated by spaces and are read inside the function as $1 $2 $3 ...
#!/bin/bash
hostname=`hostname`
initHome=$(dirname $(cd `dirname $0`; pwd))
main(){	
	init_zookeeper;
	init_kafka;
	init_es;
	init_minio;
	init_doris_be;
	init_doris_fe;
	init_nifi;
	init_flink;
	init_redis; 
	init_prometheus;	
	
}
main


function aa(){
	echo $1
}
aa 223

#output: 223
4. Formatted, colored log output; \n inserts a newline
echo -e "\033[32m [-] this hostname is $hostname \n \033[0m"   #echo -e "\033[32m green text \033[0m"
echo -e "\033[31m [-] zookeeper is not installed,please check the config.ini file \n \033[0m"  #echo -e "\033[31m red text \033[0m"
echo -e "\033[33m [-] this host is not a zookeeper node \n \033[0m"  #echo -e "\033[33m yellow text \033[0m"
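The three echo variants can be wrapped in small helper functions so the color codes live in one place; a sketch (the helper names are my own, not from the original script):

```shell
# Wrap the ANSI color codes from above in reusable log helpers.
log_info()  { echo -e "\033[32m [-] $* \033[0m"; }  # green
log_error() { echo -e "\033[31m [-] $* \033[0m"; }  # red
log_warn()  { echo -e "\033[33m [-] $* \033[0m"; }  # yellow

log_info "this hostname is $(uname -n)"
log_warn "this host is not a zookeeper node"
```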

5. for loops: when the variable is an array built with (), there are two loop styles
	#style 1
	for node in ${esHosts[@]};do
		if [ $node == $hostname ];then
			: # handle the matching node
		fi
	done
	#style 2: ${#dorisBeHosts[@]} is the length of the dorisBeHosts array
	for((i=0;i<${#dorisBeHosts[@]};i++));do
		if [ ${dorisBeHosts[$i]} == $hostname ];then
			: # handle the matching node
		fi
	done
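Both loop styles are typically used for a membership test ("is this host one of the configured nodes?"); a runnable sketch with sample data (the contains helper is my own name):

```shell
# contains <needle> <item>... : succeed if needle is among the items.
contains() {
	local needle=$1 item
	shift
	for item in "$@"; do
		[ "$item" = "$needle" ] && return 0
	done
	return 1
}

esHosts=(node01 node02 node03)
contains node02 "${esHosts[@]}" && echo "node02 is an es node"

# Index-based variant: ${#esHosts[@]} is the array length.
for ((i = 0; i < ${#esHosts[@]}; i++)); do
	echo "index $i: ${esHosts[$i]}"
done
```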

6. Extracting a substring from command output by a fixed rule
#running ip a s on the server prints:
ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a5:10:b6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.115/24 brd 192.168.100.255 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fea5:10b6/64 scope link 
       valid_lft forever preferred_lft forever
#first grep for the target interface, then grep for inet to keep the IPv4 line, then split on whitespace with awk and print the second field $2
network=`ip a s | grep eno16777736 | grep 'inet' | awk '{print $2}'`
the output is 192.168.100.115/24
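The pipeline can be verified offline against a canned ip a s fragment (copied from the output above), which also shows why grepping for the interface name must come first:

```shell
# Extract the CIDR address from a saved 'ip a s' fragment.
sample='2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a5:10:b6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.115/24 brd 192.168.100.255 scope global eno16777736'

# grep for the interface first; 'inet ' with a trailing space keeps any
# inet6 line out of the match. awk prints the second whitespace field.
network=$(echo "$sample" | grep eno16777736 | grep 'inet ' | awk '{print $2}')
echo "$network"       # 192.168.100.115/24
echo "${network%/*}"  # 192.168.100.115 (CIDR suffix stripped)
```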
7. Three convenient ways to write content to a file

a. overwrite: echo "content" > file replaces the file's contents with the given content

b. append: echo "content" >> file appends the given content to the file

c. update a single line: sed -i "s#^zookeeper.connect=.*#zookeeper.connect=$zookeeperNodes#g" $kafkaConfig replaces the whole line beginning with zookeeper.connect= with the replacement between the second and third #. Note that sed is a line-oriented stream editor: an empty file has no lines to process, so sed cannot insert anything into it.
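Because the sed replacement only works when the line already exists, the update-or-append pattern used throughout the script can be factored into one helper; a sketch (set_kv is my own name, not from the original script):

```shell
# set_kv <file> <key> <value>: replace an existing key=value line,
# or append it when the key is missing (sed cannot add absent lines).
set_kv() {
	local file=$1 key=$2 value=$3
	if grep -q "^${key}=" "$file"; then
		sed -i "s#^${key}=.*#${key}=${value}#" "$file"
	else
		echo "${key}=${value}" >> "$file"
	fi
}

kafkaConfig=$(mktemp)
echo "broker.id=0" > "$kafkaConfig"
set_kv "$kafkaConfig" broker.id 1                   # updates the line
set_kv "$kafkaConfig" zookeeper.connect node01:2181 # appends a new line
cat "$kafkaConfig"
```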

8. if statements: two forms used so far. For string comparison or boolean tests use [ expression ]; for numeric comparison use (( expression )). In the [ ] form, spaces around the expression are mandatory.
if [ $node == $hostname ];then
	: # strings match
fi
	for((i=0;i<${#nodeHosts[@]};i++));do
		if (( $i == 0 ));then
			: # first element
		elif (( $i == ${#nodeHosts[@]}-1 ));then
			: # last element
		else
			: # middle element
		fi
	done
9. String operations
#a. take the last character of a string
zkNode
${zkNode: -1}
#b. concatenate in a loop, then delete the trailing extra character
zookeeperNodes=""
for zkNode in ${zkHosts[@]};do
	zookeeperNodes+=$zkNode:2181,
done
#zookeeperNodes is now node01:2181,node02:2181,node03:2181, and the trailing , must be removed
zookeeperNodes=${zookeeperNodes%?}
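Since the script builds such host:port lists for kafka, flink, and the prometheus targets alike, the join-and-trim step can become a single helper; a sketch (join_hosts is my own name):

```shell
# join_hosts <port> <host>...: print host:port pairs joined by commas.
join_hosts() {
	local port=$1 out="" host
	shift
	for host in "$@"; do
		out+="${host}:${port},"
	done
	echo "${out%?}"    # ${var%?} drops the last character (the extra ,)
}

zkHosts=(node01 node02 node03)
zookeeperNodes=$(join_hosts 2181 "${zkHosts[@]}")
echo "$zookeeperNodes"    # node01:2181,node02:2181,node03:2181
```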
10. Testing whether a file exists: [ -f $kafkaConfig ]
if [ -f $kafkaConfig ];then
	: # file exists
fi
V. Project example
1. The cluster configuration file
#doris cluster nodes
doris.host=node01 node02 node03
#doris be nodes
doris.be=node01 node02 node03
#doris be network interfaces
doris.network.be=ens3 ens3 ens3
#doris fe nodes
doris.fe=node01 node02 node03
#doris fe network interfaces
doris.network.fe=ens3 ens3 ens3
#zookeeper cluster nodes
zookeeper=node01 node02 node03
#kafka cluster nodes
kafka=node01 node02 node03
#es cluster nodes
elasticsearch.hosts=node01 node02 node03
#es data nodes
elasticsearch.workers=["node01:9300","node02:9300","node03:9300"]
#es master nodes
elasticsearch.masters=["node01","node02"]
#minio cluster nodes
minio=node01 node02 node03
#flink master nodes
flink.master=node01 node02
#flink worker nodes
flink.worker=node01 node02 node03
#flink nodes
flink.hosts=node01 node02 node03
#host metrics collection nodes (nodeExporter-based)
node.host=node01 node02 node03
node.ip=192.168.0.22 192.168.0.23 192.168.0.24
#nifi standalone server
nifi=node03
#redis node
redis=node01
#prometheus node
prometheus=node01
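Since nearly every init function bails out with "please check the config.ini file", it can help to validate the required keys up front before anything is modified; a minimal sketch (check_config and the key list are my own, not part of the original script):

```shell
# check_config <file> <key>...: report any key missing from the config.
check_config() {
	local file=$1 key missing=0
	shift
	for key in "$@"; do
		if ! grep -q "^${key}=" "$file"; then
			echo "missing key: $key"
			missing=1
		fi
	done
	return $missing
}

configFile=$(mktemp)
printf 'zookeeper=node01 node02 node03\nkafka=node01 node02 node03\n' > "$configFile"
check_config "$configFile" zookeeper kafka && echo "config ok"
```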

2. The script
#!/bin/bash
hostname=`hostname`
initHome=$(dirname $(cd `dirname $0`; pwd))
configFile=${initHome}/setups/config.ini
#configFile=/superred/base/setup/config.ini
dorisBeHosts=(`cat $configFile|grep "doris.be"|awk -F = '{print $2}'`)
dorisFeHosts=(`cat $configFile|grep "doris.fe"|awk -F = '{print $2}'`)
dorisBeNetworks=(`cat $configFile|grep "doris.network.be"|awk -F = '{print $2}'`)
dorisFeNetworks=(`cat $configFile|grep "doris.network.fe"|awk -F = '{print $2}'`)
dorisHosts=(`cat $configFile|grep "doris.host"|awk -F = '{print $2}'`)
zkHosts=(`cat $configFile|grep "zookeeper"|awk -F = '{print $2}'`)
kafkaHosts=(`cat $configFile|grep "kafka"|awk -F = '{print $2}'`)
esHosts=(`cat $configFile|grep "elasticsearch.hosts"|awk -F = '{print $2}'`)
esMasters=(`cat $configFile|grep "elasticsearch.masters"|awk -F = '{print $2}'`)
esWorkers=(`cat $configFile|grep "elasticsearch.workers"|awk -F = '{print $2}'`)
flinkHosts=(`cat $configFile|grep "flink.hosts"|awk -F = '{print $2}'`)
flinkMasters=(`cat $configFile|grep "flink.master"|awk -F = '{print $2}'`)
flinkWorkers=(`cat $configFile|grep "flink.worker"|awk -F = '{print $2}'`)
minioHosts=(`cat $configFile|grep "minio"|awk -F = '{print $2}'`)
nifiHost=`cat $configFile|grep "nifi"|awk -F = '{print $2}'`
redisHost=`cat $configFile|grep "redis"|awk -F = '{print $2}'`
prometheusHost=`cat $configFile|grep "prometheus"|awk -F = '{print $2}'`
nodeHosts=(`cat $configFile|grep "node.host"|awk -F = '{print $2}'`)
nodeIps=(`cat $configFile|grep "node.ip"|awk -F = '{print $2}'`)

zookeeperMyid="/opt/base/zookeeper/data/myid"
zookeeperConfig="/opt/base/zookeeper/conf/zoo.cfg"
zookeeperExporterConfig="/opt/base/zookeeper/exporter/exporter.conf"
kafkaConfig="/opt/base/kafka/config/server.properties"
kafkaExporterConfig="/opt/base/kafka/exporter/exporter.conf"
esConfig="/opt/base/elasticsearch/config/elasticsearch.yml"
esExporterConfig="/opt/base/elasticsearch/exporter/exporter.conf"
minioConfig="/opt/base/minio/config.ini"
nifiConfig="/opt/base/nifi/conf/nifi.properties"
dorisBeConfig="/opt/base/doris/be/conf/be.conf"
dorisFeConfig="/opt/base/doris/fe/conf/fe.conf"
flinkConfig="/opt/base/flink/conf/flink-conf.yaml"
flinkWorkerConfig="/opt/base/flink/conf/workers"
flinkMasterConfig="/opt/base/flink/conf/masters"
redisExporterConfig="/opt/base/redis/exporter/exporter.conf"
prometheusConfig="/opt/base/prometheus/prometheus.yml"
prometheusMysqlConfig="${initHome}/setups/sql/prometheusConfig.sql"

isZkNode=0
isKafkaNode=0
isEsNode=0
isDorisBeNode=0
isDorisFeNode=0
isFlinkNode=0
isNifiNode=0
isMinioNode=0
isRedisNode=0
isPrometheusNode=0

configStr=""

echo -e
echo -e "\033[32m [-] this hostname is $hostname \n \033[0m"


#echo "doris.be:" ${dorisBeHosts[@]}
#echo "doris.fe:"${dorisFeHosts[@]}
#echo "doris.be.network:"${dorisBeNetworks[@]}
#echo "doris.fe.network:"${dorisFeNetworks[@]}
#echo "zookeeper:"${zookeeperHosts[@]}
#echo "kafka:"${kafkaHosts[@]}
#echo "es.master:"${esMaster[@]}
#echo "es.worker:"${esWorkers[@]}
#echo "flink.master:"${flinkMasters[@]}
#echo "flink.worker:"${flinkWorkers[@]}
#echo "nifi:"${nifiHost[@]}
#echo "minio:"${minioHosts[@]}
## initialize the zookeeper config file
init_zookeeper(){
	for node in ${zkHosts[@]}; do
		if [ $node == $hostname ];then
			echo -e "\033[32m [-] start init zookeeper \n \033[0m"
			isZkNode=1
			if [ -f $zookeeperMyid ];then				
				for zkNode in ${zkHosts[@]};do	
					if [ `grep -c "server.${zkNode: -1}" $zookeeperConfig`  -ne '0' ];then
						sed -i "s#^server.${zkNode: -1}=.*#server.${zkNode: -1}=${zkNode}:2888:3888#g"  $zookeeperConfig				
					else
						echo "server.${zkNode: -1}=${zkNode}:2888:3888" >> $zookeeperConfig
					fi					
				done
				
				echo ${hostname: -1} > $zookeeperMyid
				echo "zk-hosts=$hostname:2181" > $zookeeperExporterConfig				
				echo -e "\033[32m [-] zookeeper init success \n \033[0m"				
			else
				echo -e "\033[31m [-] zookeeper is not installed,please check the config.ini file \n \033[0m"
				exit 1
			fi
				
				
		fi
	done
	if [ $isZkNode == 0 ];then
		echo -e "\033[33m [-] this host is not a zookeeper node \n \033[0m"
	fi
	return 0
}
## initialize the kafka config file
init_kafka(){
	for node in ${kafkaHosts[@]};do
		if [ $node == $hostname ];then
			echo -e "\033[32m [-] start init kafka \n \033[0m"
			isKafkaNode=1
			if [ -f $kafkaConfig ];then
				sed -i "s#^broker.id=.*#broker.id=${hostname: -1}#g"  $kafkaConfig
				sed -i "s#^advertised.listeners=.*#advertised.listeners=PLAINTEXT://$hostname:9092#g"  $kafkaConfig
				#build the zookeeper cluster node string
				zookeeperNodes=""
				for zkNode in ${zkHosts[@]};do
					zookeeperNodes+=$zkNode:2181,
				done
				#zookeeperNodes is now node01:2181,node02:2181,node03:2181, and the trailing , must be removed
				zookeeperNodes=${zookeeperNodes%?}
				sed -i "s#^zookeeper.connect=.*#zookeeper.connect=$zookeeperNodes#g"  $kafkaConfig				
				echo "kafka.server=$hostname:9092" > $kafkaExporterConfig				
				echo -e "\033[32m [-] kafka init success \n \033[0m"			
			else
				echo -e "\033[31m [-] kafka is not installed, please check the config.ini file \n \033[0m"
				exit 1
			fi
		fi
	done
	if [ $isKafkaNode == 0 ];then
		echo -e "\033[33m [-] this host is not a kafka node \n \033[0m"
	fi
	return 0
}
## initialize the elasticsearch config file
init_es(){
	for node in ${esHosts[@]};do
		if [ $node == $hostname ];then			
			echo -e "\033[32m [-] start init elasticsearch \n \033[0m"
			isEsNode=1
			if [ -f $esConfig ];then
				sed -i "s#^node.name:.*#node.name: $hostname#g"  $esConfig
				sed -i "s#^network.host:.*#network.host: $hostname#g"  $esConfig
				sed -i "s#^network.publish_host:.*#network.publish_host: $hostname#g"  $esConfig					
				
				sed -i "s#^cluster.initial_master_nodes:.*#cluster.initial_master_nodes: $esMasters#g"  $esConfig
				sed -i "s#^discovery.seed_hosts:.*#discovery.seed_hosts: $esWorkers#g"  $esConfig			
					
				echo "es.uri=http://$hostname:9200" > $esExporterConfig
				
				echo -e "\033[32m [-] elasticsearch init success \n \033[0m"
			else
				echo -e "\033[31m [-] elasticsearch is not installed, please check the config.ini file \n \033[0m"
				exit 1
			fi
		fi
	done

	if [ $isEsNode == 0 ];then
		echo -e "\033[33m [-] this host is not a elasticsearch node \n \033[0m"
	fi
	return 0
}

## initialize the minio config file
init_minio(){
	for node in ${minioHosts[@]};do
		if [ $node == $hostname ];then			
			echo -e "\033[32m [-] start init minio \n \033[0m"
			isMinioNode=1
			minioNodes=""
			for((i=0;i<${#minioHosts[@]};i++));do
				
				if [ $i == 0 ];then
					minioNodes=${minioHosts[i]}	
				else
					minioNodes="$minioNodes ${minioHosts[i]}"
				fi
			done		
			if [ -f $minioConfig ];then
				sed -i "s#^minio.addr=.*#minio.addr=$minioNodes#g"  $minioConfig
				echo -e "\033[32m [-] minio init success \n \033[0m"
			else
				echo -e "\033[31m [-] minio is not installed, please check the config.ini file \n \033[0m"
				exit 1
			fi
		fi
	done
	if [ $isMinioNode == 0 ];then
		echo -e "\033[33m [-] this host is not a minio node \n \033[0m"
	fi
	return 0
}

## initialize the nifi config file
init_nifi(){
	if [ $nifiHost == $hostname ];then		
		echo -e "\033[32m [-] start init nifi \n \033[0m"
		isNifiNode=1
		if [ -f $nifiConfig ];then
			sed -i "s#^nifi.remote.input.host=.*#nifi.remote.input.host=$nifiHost#g"  $nifiConfig
			sed -i "s#^nifi.web.http.host=.*#nifi.web.http.host=$nifiHost#g"  $nifiConfig			
			echo -e "\033[32m [-] nifi init success \n \033[0m"
		else
			echo -e "\033[31m [-] nifi is not installed, please check the config.ini file \n \033[0m"
			exit 1
		fi
	fi
	if [ $isNifiNode == 0 ];then
		echo -e "\033[33m [-] this host is not a nifi node \n \033[0m"		
	fi
	return 0
}
## initialize the doris be config file
init_doris_be(){
	if [ ${#dorisBeHosts[@]} == ${#dorisBeNetworks[@]} ];then
		for((i=0;i<${#dorisBeHosts[@]};i++));do
			if [ ${dorisBeHosts[$i]} == $hostname ];then
				echo -e "\033[32m [-] start init doris be \n \033[0m"
				isDorisBeNode=1
				if [ -f $dorisBeConfig ];then					
					network=`ip a s | grep ${dorisBeNetworks[$i]} | grep 'inet' | awk '{print $2}'`
					sed -i "s#^priority_networks =.*#priority_networks = $network#g"  $dorisBeConfig
					echo -e "\033[32m [-] doris be init success \n \033[0m"
				else
					echo -e "\033[31m [-] doris be is not installed, please check the config.ini file \n \033[0m"
					exit 1
				fi
			fi
		done
	else
		echo -e "\033[31m [-] the dorisBeNetworks num is not equal to the dorisBeHosts num, please check dorisBeHosts and dorisBeNetworks in config.ini file \n \033[0m"
		exit 1
	fi
	if [ $isDorisBeNode == 0 ];then
		echo -e "\033[33m [-] this host is not a doris be node \n \033[0m"
	fi
	return 0
}

## initialize the doris fe config file
init_doris_fe(){
	if [ ${#dorisFeHosts[@]} == ${#dorisFeNetworks[@]} ];then
		for((i=0;i<${#dorisFeHosts[@]};i++));do
			if [ ${dorisFeHosts[$i]} == $hostname ];then				
				echo -e "\033[32m [-] start init doris fe \n \033[0m"
				isDorisFeNode=1
				if [ -f $dorisFeConfig ];then					
					network=`ip a s | grep ${dorisFeNetworks[$i]} | grep 'inet' | awk '{print $2}'`
					sed -i "s#^priority_networks =.*#priority_networks = $network#g"  $dorisFeConfig
					echo -e "\033[32m [-] doris fe init success \n \033[0m"
				else
					echo -e "\033[31m [-] doris fe is not installed, please check the config.ini file \n \033[0m"
					exit 1
				fi
			fi
		done
	else
		echo -e "\033[31m [-] the dorisFeNetworks num is not equal to the dorisFeHosts num, please check dorisFeHosts and dorisFeNetworks in config.ini file \n \033[0m"
		exit 1
	fi
	if [ $isDorisFeNode == 0 ];then
		echo -e "\033[33m [-] this host is not a doris fe node \n \033[0m"
	fi
	return 0
}

## initialize the flink config file
init_flink(){
	for node in ${flinkHosts[@]};do
		if [ $node == $hostname ];then			
			echo -e "\033[32m [-] start init flink \n \033[0m"
			isFlinkNode=1
			if [ -f $flinkConfig ];then
				if [ ${#flinkMasters[@]} == 0 ];then
					echo -e "\033[31m [-] please check the flink.masters node num \n \033[0m"
					exit 1
				fi
				if [ ${#minioHosts[@]} == 0 ];then
					echo -e "\033[31m [-] please check the minio node num \n \033[0m"
					exit 1
				fi
				sed -i "s#^jobmanager.rpc.address:.*#jobmanager.rpc.address: ${flinkMasters[0]}#g"  $flinkConfig
				sed -i "s#^s3.endpoint:.*#s3.endpoint: http://${minioHosts[0]}:9000#g"  $flinkConfig
				#build the zookeeper cluster node string
				zookeeperNodes=""
				for zkNode in ${zkHosts[@]};do
					zookeeperNodes+=$zkNode:2181,
				done
				#zookeeperNodes is now node01:2181,node02:2181,node03:2181, and the trailing , must be removed
				zookeeperNodes=${zookeeperNodes%?}
				sed -i "s#^high-availability.zookeeper.quorum:.*#high-availability.zookeeper.quorum: $zookeeperNodes#g"  $flinkConfig
				for((i=0;i<${#flinkMasters[@]};i++));do
					if [ $i == 0 ];then
						echo "${flinkMasters[i]}:8081" > $flinkMasterConfig
					else
						echo "${flinkMasters[i]}:8081" >> $flinkMasterConfig
					fi
				done
				for((i=0;i<${#flinkWorkers[@]};i++));do
					if [ $i == 0 ];then
						echo "${flinkWorkers[i]}" > $flinkWorkerConfig
					else
						echo "${flinkWorkers[i]}" >> $flinkWorkerConfig
					fi
				done				
				echo -e "\033[32m [-] flink init success \n \033[0m"
			else
				echo -e "\033[31m [-] flink is not installed, please check the config.ini file \n \033[0m"
				exit 1
			fi
		fi
	done
	if [ $isFlinkNode == 0 ];then
		echo -e "\033[33m [-] this host is not a flink node \n \033[0m"
	fi
	return 0

}
init_redis(){
	if [ $redisHost == $hostname ];then		
		echo -e "\033[32m [-] start init redis \n \033[0m"
		isRedisNode=1
		if [ -f $redisExporterConfig ];then
			sed -i "s#^redis.addr=.*#redis.addr=$redisHost:6379#g"  $redisExporterConfig			
			echo -e "\033[32m [-] redis init success \n \033[0m"
		else
			echo -e "\033[31m [-] redis is not installed, please check the config.ini file \n \033[0m"
			exit 1
		fi
	fi
	if [ $isRedisNode == 0 ];then
		echo -e "\033[33m [-] this host is not a redis node \n \033[0m"
	fi
	return 0
}

init_prometheus(){
	if [ $prometheusHost == $hostname ];then		
		echo -e "\033[32m [-] start init prometheus \n \033[0m"
		isPrometheusNode=1
		if [ -f $prometheusConfig ];then
			add_prometheus_head;
			add_prometheus_redis;
			add_prometheus_nifi;
			add_prometheus_node;
			add_prometheus_zk;
			add_prometheus_kafka;
			add_prometheus_es;
			add_prometheus_minio;
			add_prometheus_flink;
			add_prometheus_doris;
			#initialize the service-monitoring sql config for mysql
			init_service_config;			
			echo -e "\033[32m [-] prometheus init success \n \033[0m"
		else
			echo -e "\033[31m [-] prometheus is not installed, please check the config.ini file \n \033[0m"
			exit 1
		fi
	fi
	if [ $isPrometheusNode == 0 ];then
		echo -e "\033[33m [-] this host is not a prometheus node \n \033[0m"
	fi
	return 0
}
add_prometheus_head(){
	echo "global:" > $prometheusConfig
	echo "  scrape_interval: 15s" >> $prometheusConfig
	echo "  evaluation_interval: 15s " >> $prometheusConfig
	echo "alerting:" >> $prometheusConfig
	echo "  alertmanagers:" >> $prometheusConfig
	echo "    - static_configs:" >> $prometheusConfig
	echo "        - targets:" >> $prometheusConfig
	echo "rule_files:" >> $prometheusConfig	
	
	echo "scrape_configs:" >> $prometheusConfig
}
add_prometheus_redis(){
	echo "  - job_name: 'redis'" >> $prometheusConfig
	echo "    scrape_interval: 10s" >> $prometheusConfig
	echo "    static_configs:" >> $prometheusConfig
	echo "      - targets: ['$redisHost:9121']" >> $prometheusConfig
	echo "        labels:" >> $prometheusConfig
	echo "          instance: redis-01" >> $prometheusConfig
}
add_prometheus_nifi(){
	echo "  - job_name: 'nifi'" >> $prometheusConfig
	echo "    static_configs:" >> $prometheusConfig
	echo "      - targets: ['$nifiHost:19092']" >> $prometheusConfig
	echo "" >> $prometheusConfig
	
}
add_prometheus_node(){
	if [ ${#nodeHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the node.host num \n \033[0m"
		exit 1
	fi
	configStr=""
	for node in ${nodeHosts[@]};do
		configStr+=\'$node:9101\',
	done
	#drop the trailing , from configStr
	configStr=${configStr%?}

	echo "  - job_name: \"server-data\"" >> $prometheusConfig
	echo "    static_configs:" >> $prometheusConfig
	echo "      - targets: [$configStr]" >> $prometheusConfig	
}
add_prometheus_zk(){
	if [ ${#zkHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the zookeeper node num \n \033[0m"
		exit 1
	fi
	configStr=""
	for node in ${zkHosts[@]};do
		configStr+=\'$node:9141\',
	done
	#drop the trailing , from configStr
	configStr=${configStr%?}

	echo "  - job_name: \"zookeeper\"" >> $prometheusConfig
	echo "    scrape_interval: 60s" >> $prometheusConfig
	echo "    metrics_path: '/metrics'" >> $prometheusConfig
	echo "    static_configs:" >> $prometheusConfig
	echo "      - targets: [$configStr]" >> $prometheusConfig
	
}

add_prometheus_kafka(){
	if [ ${#kafkaHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the kafka node num \n \033[0m"
		exit 1
	fi
	configStr=""
	for node in ${kafkaHosts[@]};do
		configStr+=\'$node:9308\',
	done
	#drop the trailing , from configStr
	configStr=${configStr%?}
	
	echo "  - job_name: 'kafka'" >> $prometheusConfig
	echo "    static_configs:" >> $prometheusConfig
	echo "      - targets: [$configStr]" >> $prometheusConfig
	
	
}
add_prometheus_es(){
	if [ ${#esHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the elasticsearch.hosts node num \n \033[0m"
		exit 1
	fi	
	configStr=""
	for node in ${esHosts[@]};do
		configStr+=\'$node:9114\',
	done
	#drop the trailing , from configStr
	configStr=${configStr%?}
	echo "  - job_name: 'elasticsearch'" >> $prometheusConfig
	echo "    scrape_interval: 60s" >> $prometheusConfig
	echo "    scrape_timeout:  30s" >> $prometheusConfig
	echo "    metrics_path: \"/metrics\"" >> $prometheusConfig
	echo "    static_configs:" >> $prometheusConfig
	echo "      - targets: [$configStr]" >> $prometheusConfig
	echo "        labels:" >> $prometheusConfig
	echo "          service: elasticsearch" >> $prometheusConfig
	
}

add_prometheus_minio(){
	if [ ${#minioHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the minio node num \n \033[0m"
		exit 1
	fi	
	configStr=""
	for node in ${minioHosts[@]};do
		configStr+=\'$node:9000\',
	done
	#drop the trailing , from configStr
	configStr=${configStr%?}
	
	echo "  - job_name: \"minio\"" >> $prometheusConfig
	echo "    metrics_path: /minio/v2/metrics/cluster" >> $prometheusConfig
	echo "    scheme: http" >> $prometheusConfig
	echo "    static_configs:" >> $prometheusConfig
	echo "      - targets: [$configStr]" >> $prometheusConfig	
}
add_prometheus_flink(){
	if [ ${#flinkHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the flink node num \n \033[0m"
		exit 1
	fi	
	configStr=""
	for node in ${flinkHosts[@]};do
		configStr+=\'$node:9249\',
	done
	#drop the trailing , from configStr
	configStr=${configStr%?}
	echo "  - job_name: 'flink'" >> $prometheusConfig
	echo "    static_configs:" >> $prometheusConfig
	echo "      - targets: [$configStr]" >> $prometheusConfig
	echo "        labels:" >> $prometheusConfig
	echo "          group: job_manager" >> $prometheusConfig	
	configStr=""
	for node in ${flinkHosts[@]};do
		configStr+=\'$node:9250\',
	done
	#drop the trailing , from configStr
	configStr=${configStr%?}
	echo "      - targets: [$configStr]" >> $prometheusConfig
	echo "        labels:" >> $prometheusConfig
	echo "          group: task_manager" >> $prometheusConfig
}

add_prometheus_doris(){
	if [ ${#dorisFeHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the doris.fe num \n \033[0m"
		exit 1
	fi
	
	configStr=""
	for node in ${dorisFeHosts[@]};do
		configStr+=\'$node:8030\',
	done
	#drop the trailing , from configStr
	configStr=${configStr%?}
	echo "  - job_name: 'doris'" >> $prometheusConfig
	echo "    metrics_path: '/metrics'" >> $prometheusConfig
	echo "    static_configs: " >> $prometheusConfig
	echo "      - targets: [$configStr]" >> $prometheusConfig
	echo "        labels:" >> $prometheusConfig
	echo "          group: fe" >> $prometheusConfig
	if [ ${#dorisBeHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the doris.be num \n \033[0m"
		exit 1
	fi
	
	configStr=""
	for node in ${dorisBeHosts[@]};do
		configStr+=\'$node:8040\',
	done
	#drop the trailing , from configStr
	configStr=${configStr%?}
	echo "      - targets: [$configStr]" >> $prometheusConfig
	echo "        labels:" >> $prometheusConfig
	echo "          group: be" >> $prometheusConfig	
}

init_service_config(){
	if [ -f $prometheusMysqlConfig ];then
		
		echo "TRUNCATE TABLE \`sys_prometheus_config\`;" > $prometheusMysqlConfig
		echo "TRUNCATE TABLE \`sys_prometheus_server\`;" >> $prometheusMysqlConfig
		
		init_service_node;
		init_service_redis;
		init_service_nifi;
		init_service_zk;
		init_service_kafka;
		init_service_es;
		init_service_minio;
		init_service_flink;
		init_service_doris;			
		
	else
		echo -e "\033[31m [-] please check the prometheusConfig.sql \n \033[0m"
		exit 1
	fi
}
init_service_node(){

	if [ ${#nodeHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the node.host num \n \033[0m"
		exit 1
	fi
	for((i=0;i<${#nodeHosts[@]};i++));do
		if (( $i == 0 ));then
			echo "INSERT INTO \`sys_prometheus_config\` VALUES (${nodeHosts[$i]: -1}, 'front', '集群管理节点', '${nodeIps[$i]}', 1, '管理节点', 'server-data', '${nodeHosts[$i]}:9101', '/grafana/d/9CWBzd1f0bik001/linuxzhu-ji-xiang-qing?orgId=1&var-project=All&var-job=server-data&var-node=${nodeHosts[$i]}:9101&var-hostname=${nodeHosts[$i]}&var-device=ens3&var-maxmount=%2Fdata1&var-show_hostname=${nodeHosts[$i]}');" >> $prometheusMysqlConfig
		elif (( $i == ${#nodeHosts[@]}-1 ));then
			echo "INSERT INTO \`sys_prometheus_config\` VALUES (${nodeHosts[$i]: -1}, 'back', '集群管理节点', '192.168.0.24', 1, '接入服务器', 'server-data', '${nodeHosts[$i]}:9101', '/grafana/d/9CWBzd1f0bik001/linuxzhu-ji-xiang-qing?orgId=1&var-project=All&var-job=server-data&var-node=${nodeHosts[$i]}:9101&var-hostname=${nodeHosts[$i]}&var-device=ens3&var-maxmount=%2Fdata1&var-show_hostname=${nodeHosts[$i]}');" >> $prometheusMysqlConfig
		else
			echo "INSERT INTO \`sys_prometheus_config\` VALUES (${nodeHosts[$i]: -1}, 'middle', '集群管理节点', '192.168.0.23', 1, '管理节点', 'server-data', '${nodeHosts[$i]}:9101', '/grafana/d/9CWBzd1f0bik001/linuxzhu-ji-xiang-qing?orgId=1&var-project=All&var-job=server-data&var-node=${nodeHosts[$i]}:9101&var-hostname=${nodeHosts[$i]}&var-device=ens3&var-maxmount=%2Fdata1&var-show_hostname=${nodeHosts[$i]}');" >> $prometheusMysqlConfig
		fi
	done
}
init_service_redis(){
	echo "INSERT INTO \`sys_prometheus_server\` VALUES (null, ${redisHost: -1}, 'redis', 'redis', 'redis-01', '/grafana/d/NzaDzdBWz/redisxiang-qing?orgId=1&refresh=30s');" >> $prometheusMysqlConfig
}
init_service_nifi(){
	echo "INSERT INTO \`sys_prometheus_server\` VALUES (null, ${nifiHost: -1}, 'nifi', 'nifi', '${nifiHost}:19092', '/grafana/d/6qFIRLgGz/nifi-prometheusreportingtask-dashboard?orgId=1&refresh=5');" >> $prometheusMysqlConfig
}
init_service_zk(){
	if [ ${#zkHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the zookeeper node num \n \033[0m"
		exit 1
	fi	
	for node in ${zkHosts[@]};do
		echo "INSERT INTO \`sys_prometheus_server\` VALUES (null, ${node: -1}, 'zookeeper', 'zookeeper', '$node:9141', '/grafana/d/4HhbN1BZk/zookeeper-exporter-dabealu?orgId=1');" >> $prometheusMysqlConfig
	done
}
init_service_kafka(){
	if [ ${#kafkaHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the kafka node num \n \033[0m"
		exit 1
	fi	
	for node in ${kafkaHosts[@]};do
		echo "INSERT INTO \`sys_prometheus_server\` VALUES (null, ${node: -1}, 'kafka', 'kafka', '$node:9308', '/grafana/d/jwPKIsniz/kafka-dashboard?orgId=1&refresh=1m');" >> $prometheusMysqlConfig
	done
}
init_service_es(){
	if [ ${#esHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the elasticsearch.hosts node num \n \033[0m"
		exit 1
	fi	
	for node in ${esHosts[@]};do
		echo "INSERT INTO \`sys_prometheus_server\` VALUES (null, ${node: -1}, 'elasticsearch', 'elasticsearch', '$node:9114', '/grafana/d/aR2uqq14z/elasticsearch?orgId=1');" >> $prometheusMysqlConfig
	done
}
init_service_minio(){
	if [ ${#minioHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the minio node num \n \033[0m"
		exit 1
	fi	
	
	for node in ${minioHosts[@]};do
		echo "INSERT INTO \`sys_prometheus_server\` VALUES (null, ${node: -1}, 'minio', 'minio', '$node:9000', '/grafana/d/TgmJnqnnk/minio-dashboard?orgId=1');" >> $prometheusMysqlConfig
	done
}
init_service_flink(){
	if [ ${#flinkHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the flink node num \n \033[0m"
		exit 1
	fi	
	configStr=""
	for node in ${flinkHosts[@]};do
		echo "INSERT INTO \`sys_prometheus_server\` VALUES (null, ${node: -1}, 'flink', 'flink', '$node:9250', '/grafana/d/wKbnD5Gnk/apache-flink-2021-dashboard-for-job-task-manager?orgId=1');" >> $prometheusMysqlConfig
	done
}
init_service_doris(){
	if [ ${#dorisHosts[@]} == 0 ];then
		echo -e "\033[31m [-] please check the doris.hosts node num \n \033[0m"
		exit 1
	fi	
	configStr=""
	for node in ${dorisHosts[@]};do
		echo "INSERT INTO \`sys_prometheus_server\` VALUES (null, ${node: -1}, 'doris', 'doris', '$node:8040', '/grafana/d/1fFiWJ4mz/doris-overview?orgId=1');" >> $prometheusMysqlConfig
	done
}

main(){	
	init_zookeeper;
	init_kafka;
	init_es;
	init_minio;
	init_doris_be;
	init_doris_fe;
	init_nifi;
	init_flink;
	init_redis; 
	init_prometheus;	
	
}
main
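The long runs of echo in add_prometheus_head and friends can alternatively be written as a single quoted heredoc, which keeps the YAML indentation (spaces only, never tabs) visible in one piece; a sketch writing the same static header:

```shell
# Write the static prometheus.yml header in one block instead of one
# echo per line; the quoted 'EOF' prevents variable expansion.
prometheusConfig=$(mktemp)
cat > "$prometheusConfig" <<'EOF'
global:
  scrape_interval: 15s
  evaluation_interval: 15s
alerting:
  alertmanagers:
    - static_configs:
        - targets:
rule_files:
scrape_configs:
EOF
```

The per-service jobs can then still be appended with echo (or further heredocs) exactly as the script does.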

3. Generated configuration files
zookeeper
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/opt/base/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
##settings replaced by the script
server.1=node01:2888:3888
server.2=node02:2888:3888
server.3=node03:2888:3888

4lw.commands.whitelist=*

kafka
#settings modified by the script
broker.id=1
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/superred/base/kafka/data
num.partitions=5
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
#settings modified by the script
zookeeper.connect=node01:2181,node02:2181,node03:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
#settings modified by the script
advertised.listeners=PLAINTEXT://node01:9092
listeners=PLAINTEXT://:9092
default.replication.factor=3
replica.fetch.max.bytes=5242880
# keep data longer for testing: 100 days
log.retention.hours=2400
delete.topic.enable=true
message.max.bytes=1073741824
#disable automatic topic creation
auto.create.topics.enable=false



es
cluster.name: es-cluster
#settings replaced by the script
node.name: node01
node.data: true
path.data: /opt/base/elasticsearch/data
path.logs: /opt/base/elasticsearch/logs
network.host: node01
network.publish_host: node01
http.port: 9200
#settings replaced by the script
discovery.seed_hosts: ["node01:9300","node02:9300","node03:9300"]
#settings replaced by the script
cluster.initial_master_nodes: ["node01","node02"]

flink
env.log.dir: /opt/base/flink/log
env.pid.dir: /opt/base/flink/pid
env.java.opts: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdump.hprof -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/gc-%t.log
jobmanager.rpc.address: node01
#jobmanager.rpc.port: 6123
high-availability.jobmanager.port: 6123-6129
taskmanager.rpc.port: 50100-50200
taskmanager.data.port: 6121
blob.server.port: 6130

jobmanager.memory.process.size: 1600m
taskmanager.memory.process.size: 2048m
# taskmanager.memory.flink.size: 1280m
taskmanager.numberOfTaskSlots: 8
parallelism.default: 1
#settings replaced by the script
s3.endpoint: http://node01:9000
s3.path.style.access: true
s3.access-key: admin
s3.secret-key: admin123
# fs.default-scheme
high-availability: zookeeper
high-availability.storageDir: s3://flink/ha/
#settings replaced by the script
high-availability.zookeeper.quorum: node01:2181,node02:2181,node03:2181

state.backend: filesystem
state.savepoints.dir: s3://flink/savepoints
state.checkpoints.dir: s3://flink/checkpoints
jobmanager.execution.failover-strategy: region
#rest.port: 8081
#rest.bind-port: 8080-8090
classloader.resolve-order: child-first
historyserver.web.port: 8082
historyserver.archive.fs.dir: s3://flink/completed-jobs/
#historyserver.archive.fs.refresh-interval: 10000

# metric
metrics.reporters: prom
metrics.reporter.prom.port: 9249-9259
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter

minio
# Support stand-alone deployment
#settings replaced by the script
minio.addr=node01 node02 node03
minio.port=9000
minio.console.port=9001
minio.username=admin
minio.password=admin123
minio.data.dir=/opt/base/minio/data1 /opt/base/minio/data2
minio.boot.sleep=10
nifi
#many more settings above this point, omitted
....
#settings replaced by the script
nifi.remote.input.host=node01
nifi.remote.input.secure=false
nifi.remote.input.socket.port=
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.remote.contents.cache.expiration=30 secs
#settings replaced by the script
nifi.web.http.host=node01
nifi.web.http.port=8443
......
#many more settings below this point, omitted
prometheus (the entire file is written out line by line with echo)
global:
  scrape_interval: 15s
  evaluation_interval: 15s
alerting:
  alertmanagers:
    - static_configs:
        - targets:
rule_files:
scrape_configs:
  - job_name: 'redis'
    scrape_interval: 10s
    static_configs:
      - targets: ['node01:9121']
        labels:
          instance: redis-01
  - job_name: 'nifi'
    static_configs:
      - targets: ['node03:19092']

  - job_name: "server-data"
    static_configs:
      - targets: ['node01:9101','node02:9101','node03:9101']
  - job_name: "zookeeper"
    scrape_interval: 60s
    metrics_path: '/metrics'
    static_configs:
      - targets: ['node01:9141','node02:9141','node03:9141']
  - job_name: 'kafka'
    static_configs:
      - targets: ['node01:9308','node02:9308','node03:9308']
  - job_name: 'elasticsearch'
    scrape_interval: 60s
    scrape_timeout:  30s
    metrics_path: "/metrics"
    static_configs:
      - targets: []
        labels:
          service: elasticsearch
  - job_name: "minio"
    metrics_path: /minio/v2/metrics/cluster
    scheme: http
    static_configs:
      - targets: ['node01:9000','node02:9000','node03:9000']
  - job_name: 'flink'
    static_configs:
      - targets: ['node01:9249','node02:9249','node03:9249']
        labels:
          group: job_manager
      - targets: ['node01:9250','node02:9250','node03:9250']
        labels:
          group: task_manager
  - job_name: 'doris'
    metrics_path: '/metrics'
    static_configs:
      - targets: ['node01:8030','node02:8030','node03:8030']
        labels:
          group: fe
      - targets: ['node01:8040','node02:8040','node03:8040']
        labels:
          group: be

doris
#only the line for the given network interface needs replacing
priority_networks = 192.168.100.115/24
