03 Linux Automated Deployment

Running commands on other machines with one click

vi shell.sh

#!/bin/bash
list="192.168.231.202 192.168.231.203"
username=root
userpwd=root
for host in $list;do
  echo "=====$host===="
  sshpass -p"$userpwd" ssh -o StrictHostKeyChecking=no $username@$host "$1"
done
chmod +x shell.sh
./shell.sh "ls /"
./shell.sh "ls /root"
./shell.sh "mkdir -p /root/a"
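A dry-run variant of the loop makes it easy to check what would be executed before pointing it at real hosts. This is only a sketch; the DRY_RUN switch and the run_on_all name are mine, not part of the original script.

```shell
#!/bin/bash
# Sketch of shell.sh with a dry-run switch: DRY_RUN=1 (the default here)
# only prints the ssh command that would run; DRY_RUN=0 performs it.
list="192.168.231.202 192.168.231.203"
username=root
userpwd=root
DRY_RUN=${DRY_RUN:-1}

run_on_all() {
    local cmd="$1" host
    for host in $list; do
        echo "=====$host===="
        if [ "$DRY_RUN" = 1 ]; then
            echo "would run: ssh $username@$host -- $cmd"
        else
            sshpass -p"$userpwd" ssh -o StrictHostKeyChecking=no "$username@$host" "$cmd"
        fi
    done
}

run_on_all "ls /"
```

Invoke with DRY_RUN=0 once the printed commands look right.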

Creating a user and enabling passwordless login

vi init_env.sh

#!/bin/bash
# 0 Make sure at least 2 arguments were given
if [ $# -lt 2 ];then
   echo "Usage: sh $0 user_name user_pass"
   exit 1
fi

# 1 Initialize variables
ROOT_PASS="root"
USER_NAME=$1
USER_PASS=$2
HOST_LIST="192.168.231.201 192.168.231.202 192.168.231.203"

# 2 Create the user locally on the management host and set its password
useradd $USER_NAME
echo $USER_PASS | passwd --stdin $USER_NAME

# 3 On the management host, generate a key pair for the new user
su - $USER_NAME -c "ssh-keygen -t rsa -P '' -f /home/$USER_NAME/.ssh/id_rsa"
PUB_KEY="`cat /home/$USER_NAME/.ssh/id_rsa.pub`"

# 4 Use sshpass to create the user and set its password on every managed host,
# then push the management host's public key to each of them
for host in $HOST_LIST;do
   sshpass -p$ROOT_PASS ssh -o  StrictHostKeyChecking=no root@$host "useradd $USER_NAME"
   sshpass -p$ROOT_PASS ssh -o  StrictHostKeyChecking=no root@$host "echo $USER_PASS | passwd --stdin $USER_NAME"
   sshpass -p$ROOT_PASS ssh -o  StrictHostKeyChecking=no root@$host "mkdir /home/$USER_NAME/.ssh -pv"
   sshpass -p$ROOT_PASS ssh -o  StrictHostKeyChecking=no root@$host "echo $PUB_KEY > /home/$USER_NAME/.ssh/authorized_keys"
   sshpass -p$ROOT_PASS ssh -o  StrictHostKeyChecking=no root@$host "chmod 600 /home/$USER_NAME/.ssh/authorized_keys"
   sshpass -p$ROOT_PASS ssh -o  StrictHostKeyChecking=no root@$host "chown -R $USER_NAME:$USER_NAME /home/$USER_NAME/.ssh"
done

chmod +x init_env.sh
./init_env.sh user2 user2
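The key-generation line in init_env.sh pipes an empty string into ssh-keygen to skip its prompts; passing -N '' and -f directly is more robust. A minimal sketch against a scratch directory (the scratch path is for illustration only; the real script writes under /home/$USER_NAME/.ssh):

```shell
# Generate a passphrase-less RSA key pair non-interactively:
# -q silences output, -N '' sets an empty passphrase, -f gives the path.
tmphome=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$tmphome/id_rsa"
PUB_KEY=$(cat "$tmphome/id_rsa.pub")
echo "$PUB_KEY" | awk '{print $1}'   # key type, e.g. ssh-rsa
```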

Manually installing ZooKeeper and Kafka

The ZooKeeper cluster depends on a Java runtime, so the JDK must be installed before anything else.

Lab notes:

(1) Build a ZooKeeper/Kafka cluster; as a rule the number of nodes should be odd.

(2) The lab uses 3 virtual machines built with VMware Workstation; suggested VM specs: more than 1 GB RAM, more than 10 GB disk.

node01  192.168.231.201  zookeeper, kafka
node02  192.168.231.202  zookeeper, kafka
node03  192.168.231.203  zookeeper, kafka

Prerequisites

Disable the firewall and SELinux

Run the following commands on hosts server01, server02, and server03:

(1) Stop firewalld

systemctl stop firewalld

(2) Disable firewalld at boot

systemctl disable firewalld

(3) Disable SELinux for the current session (temporary)

setenforce 0

(4) Disable SELinux in the config file (permanent)

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
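The sed substitution can be rehearsed on a scratch copy before touching the real /etc/sysconfig/selinux, a quick sanity check:

```shell
# Apply the SELinux edit to a scratch file and confirm the result.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' "$cfg"
grep '^SELINUX=' "$cfg"   # prints SELINUX=disabled
```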

Prepare the installation packages

JDK: jdk-8u212-linux-x64.tar.gz
Zookeeper: apache-zookeeper-3.7.0-bin.tar.gz
Upload both archives to the /root directory on host node01 (192.168.231.201).

Install the JDK

Use Xshell, SecureCRT, or a similar SSH client to connect to the 3 nodes:

Run the following on host node01 (192.168.231.201):

(1) Create the directory: mkdir /opt/source -pv

(2) Extract the JDK archive into /opt/source

tar -zxvf jdk-8u212-linux-x64.tar.gz -C /opt/source

(3) Configure the Java environment variables by writing the following to /etc/profile.d/java.sh

export JAVA_HOME=/opt/source/jdk1.8.0_212

export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin

export JAVA_HOME PATH

(4) source /etc/profile.d/java.sh

(5) Verify the JDK on node01 by running: which java

Switch to node02 and node03 and run:

(6) mkdir /opt/source -pv

Switch back to node01 and run:

(7) Copy the JDK directory and the environment file /etc/profile.d/java.sh to node02 and node03

scp -r /opt/source/jdk1.8.0_212/ root@node02:/opt/source/

scp -r /opt/source/jdk1.8.0_212/ root@node03:/opt/source/

scp /etc/profile.d/java.sh root@node02:/etc/profile.d/

scp /etc/profile.d/java.sh root@node03:/etc/profile.d/

Switch to node02 and node03 and run:

(8) source /etc/profile.d/java.sh

(9) Verify the JDK on node02 and node03 by running: which java

Install ZooKeeper

Run the following on host node01 (192.168.231.201):

(1) Extract the ZooKeeper archive into /opt/source

tar -zxvf apache-zookeeper-3.7.0-bin.tar.gz -C /opt/source/

(2) Create a zookeeper symlink

ln -sv /opt/source/apache-zookeeper-3.7.0-bin/ /opt/source/zookeeper

(3) Copy the ZooKeeper install directory to node02 and node03

scp -r apache-zookeeper-3.7.0-bin/ root@node02:/opt/source/

scp -r apache-zookeeper-3.7.0-bin/ root@node03:/opt/source/

Switch to node02 and node03 and run:

(4) Create the zookeeper symlink

Edit the ZooKeeper configuration

Edit the configuration file on node01, node02, and node03 (all three nodes):

(1) Change to the ZooKeeper config directory

cd /opt/source/zookeeper/conf/

(2) Copy the sample config file

cp zoo_sample.cfg zoo.cfg

(3) Append the following to the end of zoo.cfg:

server.1=192.168.231.201:2888:3888

server.2=192.168.231.202:2888:3888

server.3=192.168.231.203:2888:3888

(4) Create the ZooKeeper data directory

mkdir /data/zk -pv

(5) In zoo.cfg, change the data directory to /data/zk

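After steps (3) through (5), the relevant part of zoo.cfg should look roughly like this (tickTime, initLimit, syncLimit, and clientPort are the zoo_sample.cfg defaults):

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zk
clientPort=2181
server.1=192.168.231.201:2888:3888
server.2=192.168.231.202:2888:3888
server.3=192.168.231.203:2888:3888
```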
(6) Create a distinct myid file on node01, node02, and node03

On node01: echo 1 > /data/zk/myid

On node02: echo 2 > /data/zk/myid

On node03: echo 3 > /data/zk/myid
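Instead of typing a different echo on each node, the id can be derived from the host's last IP octet. The 201 -> 1 mapping below is my own convention, matching the addressing used above:

```shell
# Derive the ZooKeeper myid from an IP address: 192.168.231.202 -> 2.
last_octet() {
    echo "$1" | awk -F. '{print $4}'
}
myid=$(( $(last_octet 192.168.231.202) - 200 ))
echo "$myid"   # on node02 this would be written to /data/zk/myid
```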

Start the ZooKeeper server

Start the service on node01, node02, and node03 (all three nodes):

/opt/source/zookeeper/bin/zkServer.sh start

Verify the cluster state

Run the following on node01, node02, and node03:

/opt/source/zookeeper/bin/zkServer.sh status

If the command reports one node as leader and the other two as follower, the 3-node distributed ZooKeeper cluster is deployed successfully.
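The leader/follower role can also be extracted programmatically from the status output. Below, a canned sample stands in for the real `zkServer.sh status` call, so the parsing can be checked offline:

```shell
# Parse the "Mode:" line from zkServer.sh status output.
sample_status() {
    printf 'ZooKeeper JMX enabled by default\nMode: follower\n'
}
mode=$(sample_status | awk -F': ' '/^Mode/{print $2}')
echo "$mode"   # prints follower
```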

Install Kafka

Scala: scala-2.12.11.tgz

Kafka: kafka_2.12-2.6.1.tgz

Upload both archives to the /root directory on host node01 (192.168.231.201).

Set up the Scala environment

Run the following on host node01 (192.168.231.201):

(1) Extract the Scala archive into /opt/source

tar -zxvf scala-2.12.11.tgz -C /opt/source/

(2) Create the environment variable file /etc/profile.d/scala.sh

vim /etc/profile.d/scala.sh

export SCALA_HOME=/opt/source/scala-2.12.11

export PATH=$PATH:$SCALA_HOME/bin

export SCALA_HOME PATH

(3) Copy the Scala install directory to node02 and node03

scp -r /opt/source/scala-2.12.11/ root@node02:/opt/source

scp -r /opt/source/scala-2.12.11/ root@node03:/opt/source

(4) Copy /etc/profile.d/scala.sh to node02 and node03

scp /etc/profile.d/scala.sh root@node02:/etc/profile.d/

scp /etc/profile.d/scala.sh root@node03:/etc/profile.d/

Run the following on node01, node02, and node03:

(5) Load the Scala environment variables on node01, node02, and node03

source /etc/profile.d/scala.sh

(6) Verify the Scala environment by running: scala -version

Install and configure Kafka

Run the following on host node01 (192.168.231.201):

(1) Extract the Kafka archive into /opt/source

tar -zxvf kafka_2.12-2.6.1.tgz -C /opt/source/

(2) Copy the kafka directory to node02 and node03

scp -r /opt/source/kafka_2.12-2.6.1/ root@node02:/opt/source/

scp -r /opt/source/kafka_2.12-2.6.1/ root@node03:/opt/source/

Run the following on node01, node02, and node03 (all three nodes):

(3) Create a symlink to the Kafka home directory

cd /opt/source/

ln -sv kafka_2.12-2.6.1/ kafka

(4) Create the Kafka data directory

mkdir /data/kafka/log -pv

(5) Edit the Kafka config file server.properties; 4 changes are needed

Change 1: set zookeeper.connect; the value is the same on all three nodes, and every node must be changed

Change 2: set broker.id; the value must differ on every node, for example:

node01: broker.id=100

node02:broker.id=101

node03:broker.id=102

Change 3: set the listener address; it differs per node, use each node's actual IP

Change 4: set the Kafka data directory log.dirs to /data/kafka/log; the value is the same on all three nodes, and every node must be changed

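The four edits can be rehearsed with sed on a scratch copy of server.properties (node01 values shown; the three-line file below is a cut-down stand-in for the real config):

```shell
# Apply the four server.properties changes to a scratch file.
props=$(mktemp)
cat > "$props" << 'EOF'
broker.id=0
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181
EOF
sed -i 's/^broker.id=0/broker.id=100/' "$props"
sed -i 's#^log.dirs=/tmp/kafka-logs#log.dirs=/data/kafka/log#' "$props"
sed -i 's/^zookeeper.connect=localhost:2181/zookeeper.connect=192.168.231.201:2181,192.168.231.202:2181,192.168.231.203:2181/' "$props"
echo 'listeners=PLAINTEXT://192.168.231.201:9092' >> "$props"
cat "$props"
```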
Start the Kafka server

Start the service on node01, node02, and node03 (all three nodes):

cd /opt/source/kafka/bin/

./kafka-server-start.sh -daemon ../config/server.properties

Verify the Kafka cluster

Create a test topic named test, with 5 partitions and 2 replicas

cd /opt/source/kafka/bin/

./kafka-topics.sh --zookeeper localhost --create --topic test --partitions 5 --replication-factor 2

The output "Created topic test" indicates the topic was created successfully

View the partition placement details for topic test

./kafka-topics.sh --zookeeper localhost --topic test --describe

If both the Replicas and Isr columns in the output include 100, 101, and 102, the whole Kafka cluster is deployed successfully.

One-click deployment script

#!/bin/bash
#

if [ -e ./deploy_kafka.log ];then
	rm -f ./deploy_kafka.log
fi

set -e
exec 1>> ./deploy_kafka.log 2>&1

# Initialize variables
HOST_LIST="192.168.231.201 192.168.231.202 192.168.231.203"
CMD_NUM=0
LOCAL_DIR="/opt/tmp"
PACKAGE_DIR="/opt/package"
APP_DIR="/opt/source"
JDK_NAME="jdk-8u212-linux-x64.tar.gz"
ZK_NAME="apache-zookeeper-3.7.0-bin.tar.gz"
SCALA_NAME="scala-2.12.11.tgz"
KAFKA_NAME="kafka_2.12-2.6.1.tgz"

# Wrapper: run a command on every host
function remote_execute
{
	for host in $HOST_LIST;do
		CMD_NUM=`expr $CMD_NUM + 1`
		echo "+++++++++++++++Execute Command < $@ > ON Host: $host+++++++++++++++"
		ssh -o StrictHostKeyChecking=no root@$host $@
		if [ $? -eq 0 ];then
			echo "$CMD_NUM  Congratulation.Command < $@ > execute success"
		else
			echo "$CMD_NUM  Sorry.Command < $@ > execute failed"
		fi
	done
}

# Wrapper: copy a file or directory to every host
function remote_transfer
{
	SRC_FILE=$1
	DST_DIR=$2
	# The function requires 2 arguments: a local file or directory, then a remote directory
	if [ $# -lt 2 ];then
		echo "Usage: $0 <file|dir> <dst_dir>"
		exit 1
	fi

	# Exit with a message if the source (first argument) does not exist
	if [ ! -e $SRC_FILE ];then
		echo "ERROR - $SRC_FILE is not exist,Please check..."
		exit 1
	fi


	for host in $HOST_LIST;do
		echo "+++++++++++++++Transfer File To HOST: $host+++++++++++++++"
		CMD_NUM=`expr $CMD_NUM + 1`
		# Create the remote directory (second argument) if it does not exist
		ssh -o StrictHostKeyChecking=no root@$host "if [ ! -e $DST_DIR ];then mkdir $DST_DIR -p;fi"
		
		scp -r -o StrictHostKeyChecking=no $SRC_FILE root@$host:$DST_DIR/
		if [ $? -eq 0 ];then
			echo "Remote Host: $host - $CMD_NUM - INFO - SCP $SRC_FILE To dir $DST_DIR Success"
		else
			echo "Remote Host: $host - $CMD_NUM - ERROR - SCP $SRC_FILE To dir $DST_DIR Failed"
		fi
	done

}


# Step 1: disable firewalld and selinux
remote_execute "systemctl stop firewalld"
remote_execute "systemctl disable firewalld"
#remote_execute "setenforce 0"
#remote_execute "sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux"



# Step 2: install and configure the JDK
remote_transfer $LOCAL_DIR/$JDK_NAME $PACKAGE_DIR
remote_execute "if [ ! -d $APP_DIR ];then mkdir -p $APP_DIR;fi"
remote_execute "tar zxvf $PACKAGE_DIR/$JDK_NAME -C $APP_DIR"

cat >> /etc/profile << EOF
export JAVA_HOME=/opt/source/jdk1.8.0_212
export PATH=\$PATH:\$JAVA_HOME/bin:\$JAVA_HOME/jre/bin
EOF

remote_transfer /etc/profile /etc/
remote_execute "source /etc/profile"
remote_execute "java -version"


# Step 3: install, configure, and start zookeeper

remote_transfer $LOCAL_DIR/$ZK_NAME $PACKAGE_DIR
remote_execute "tar zxvf $PACKAGE_DIR/$ZK_NAME -C $APP_DIR"

remote_execute "if [ -e $APP_DIR/zookeeper ];then rm -f $APP_DIR/zookeeper;fi"
remote_execute "ln -sv $APP_DIR/apache-zookeeper-3.7.0-bin $APP_DIR/zookeeper"

remote_execute "cp $APP_DIR/zookeeper/conf/zoo_sample.cfg $APP_DIR/zookeeper/conf/zoo.cfg"

cat > $LOCAL_DIR/zoo_tmp.conf << EOF
server.1=192.168.231.201:2888:3888
server.2=192.168.231.202:2888:3888
server.3=192.168.231.203:2888:3888
EOF

remote_transfer $LOCAL_DIR/zoo_tmp.conf /tmp
remote_execute "cat /tmp/zoo_tmp.conf >> $APP_DIR/zookeeper/conf/zoo.cfg"

remote_execute "if [ -e /data/zk ];then rm -rf /data/zk;fi"
remote_execute "mkdir /data/zk -p"
remote_execute "sed -i 's/dataDir=\/tmp\/zookeeper/dataDir=\/data\/zk/g' $APP_DIR/zookeeper/conf/zoo.cfg"

remote_execute 'if [ `hostname` == "server01" ];then echo 1 > /data/zk/myid;fi'
remote_execute 'if [ `hostname` == "server02" ];then echo 2 > /data/zk/myid;fi'
remote_execute 'if [ `hostname` == "server03" ];then echo 3 > /data/zk/myid;fi'

remote_execute "jps | grep QuorumPeerMain | grep -v grep | awk '{print \$1}' > /tmp/zk.pid"
remote_execute 'if [ -s /tmp/zk.pid ];then kill -9 `cat /tmp/zk.pid`;fi'
remote_execute "$APP_DIR/zookeeper/bin/zkServer.sh start"


# Step 4: install and configure the scala environment
remote_transfer $LOCAL_DIR/$SCALA_NAME $PACKAGE_DIR
remote_execute "tar zxvf $PACKAGE_DIR/$SCALA_NAME -C $APP_DIR"

cat >> /etc/profile << EOF
export SCALA_HOME=$APP_DIR/scala-2.12.11
export PATH=\$PATH:\$SCALA_HOME/bin
EOF

remote_transfer /etc/profile /etc
remote_execute "source /etc/profile"
#remote_execute "scala -version"


# Step 5: install, configure, and start kafka

remote_transfer $LOCAL_DIR/$KAFKA_NAME $PACKAGE_DIR
remote_execute "tar zxvf $PACKAGE_DIR/$KAFKA_NAME -C $APP_DIR"

remote_execute "if [ -e $APP_DIR/kafka ];then rm -rf $APP_DIR/kafka;fi"
remote_execute "ln -sv $APP_DIR/kafka_2.12-2.6.1 $APP_DIR/kafka"

remote_execute "if [ -e /data/kafka/log ];then rm -rf /data/kafka/log;fi"
remote_execute "mkdir -p /data/kafka/log"

remote_execute "sed -i '/zookeeper.connect=localhost:2181/d' $APP_DIR/kafka/config/server.properties"
remote_execute "sed -i '\$azookeeper.connect=192.168.231.201:2181,192.168.231.202:2181,192.168.231.203:2181' $APP_DIR/kafka/config/server.properties"

remote_execute "if [ \`hostname\` == "server01" ];then sed -i 's/broker.id=0/broker.id=100/g' $APP_DIR/kafka/config/server.properties;fi"
remote_execute "if [ \`hostname\` == "server02" ];then sed -i 's/broker.id=0/broker.id=101/g' $APP_DIR/kafka/config/server.properties;fi"
remote_execute "if [ \`hostname\` == "server03" ];then sed -i 's/broker.id=0/broker.id=102/g' $APP_DIR/kafka/config/server.properties;fi"


remote_execute "if [ \`hostname\` == "server01" ];then sed -i '\$alisteners=PLAINTEXT://192.168.231.201:9092' $APP_DIR/kafka/config/server.properties;fi"
remote_execute "if [ \`hostname\` == "server02" ];then sed -i '\$alisteners=PLAINTEXT://192.168.231.202:9092' $APP_DIR/kafka/config/server.properties;fi"
remote_execute "if [ \`hostname\` == "server03" ];then sed -i '\$alisteners=PLAINTEXT://192.168.231.203:9092' $APP_DIR/kafka/config/server.properties;fi"

remote_execute "sed -i 's/log.dirs=\/tmp\/kafka-logs/log.dirs=\/data\/kafka\/log/g' $APP_DIR/kafka/config/server.properties"

remote_execute "jps | grep Kafka | grep -v grep | awk '{print \$1}' > /tmp/kafka.pid"
remote_execute "if [ -s /tmp/kafka.pid ];then kill -9 \`cat /tmp/kafka.pid\`;fi"

remote_execute "$APP_DIR/kafka/bin/kafka-server-start.sh -daemon $APP_DIR/kafka/config/server.properties"

sleep 30

remote_execute "if [ \`hostname\` == "server01" ];then $APP_DIR/kafka/bin/kafka-topics.sh --zookeeper localhost --create --topic test --partitions 5 --replication-factor 2;fi"

sleep 5

remote_execute "if [ \`hostname\` == "server01" ];then $APP_DIR/kafka/bin/kafka-topics.sh --zookeeper localhost --topic test --describe;fi"

Passwordless login

cat ssh_auto.sh 
#!/bin/bash
# 1 Initialize variables
ROOT_PASS="root"
HOST_LIST="192.168.231.201 192.168.231.202 192.168.231.203"

# 2 On the management host, generate a key pair if one does not already exist
[ ! -f /root/.ssh/id_rsa.pub ] && ssh-keygen -t rsa -P '' &>/dev/null
PUB_KEY="`cat /root/.ssh/id_rsa.pub`"

# 3 Use sshpass to push the management host's public key
# to every managed host and fix ownership and permissions
for host in $HOST_LIST;do
   sshpass -p$ROOT_PASS ssh -o  StrictHostKeyChecking=no root@$host "mkdir /root/.ssh/ -pv"
   sshpass -p$ROOT_PASS ssh -o  StrictHostKeyChecking=no root@$host "echo $PUB_KEY > /root/.ssh/authorized_keys"
   sshpass -p$ROOT_PASS ssh -o  StrictHostKeyChecking=no root@$host "chmod 600 /root/.ssh/authorized_keys"
   sshpass -p$ROOT_PASS ssh -o  StrictHostKeyChecking=no root@$host "chown -R root:root /root/.ssh"
done
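The loop above relies on mkdir, chmod 600, and chown to leave authorized_keys in a state sshd will accept. The expected layout can be reproduced and checked locally in a scratch directory (no ssh involved; the key string is a dummy):

```shell
# Recreate the ~/.ssh layout the script enforces and verify the modes.
d=$(mktemp -d)
mkdir -p "$d/.ssh"
echo "ssh-rsa AAAA... root@mgmt" > "$d/.ssh/authorized_keys"
chmod 700 "$d/.ssh"
chmod 600 "$d/.ssh/authorized_keys"
stat -c '%a' "$d/.ssh" "$d/.ssh/authorized_keys"   # prints 700 then 600
```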

One-click permanent disabling of firewalld and selinux

#!/bin/bash
#
if [ -e ./deploy_kafka.log ];then
	rm -f ./deploy_kafka.log
fi

exec 1>> ./deploy_kafka.log 2>&1

# Initialize variables
HOST_LIST="192.168.231.201 192.168.231.202 192.168.231.203"
CMD_NUM=0

# Wrapper: run a command on every host
function remote_execute
{
	for host in $HOST_LIST;do
		CMD_NUM=`expr $CMD_NUM + 1`
		echo "+++++++++++++++Execute Command < $@ > ON Host: $host+++++++++++++++"
		ssh -o StrictHostKeyChecking=no root@$host $@
		if [ $? -eq 0 ];then
			echo "$CMD_NUM  Congratulation.Command < $@ > execute success"
		else
			echo "$CMD_NUM  Sorry.Command < $@ > execute failed"
		fi
	done
}


# Step 1: disable firewalld and selinux
remote_execute "systemctl stop firewalld"
remote_execute "systemctl disable firewalld"
remote_execute "setenforce 0"
remote_execute "sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux"
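One quoting detail in remote_execute: forwarding the command as unquoted $@ lets the local shell re-split it into words. With ssh this usually still works, because the remote shell joins the words back together, but any quoting inside the command is lost. A local stand-in makes the difference visible:

```shell
# Compare how many arguments arrive when a command string is forwarded
# as unquoted $@ versus as a single quoted "$*".
show_argc() { echo "argc=$#"; }
demo() {
    show_argc $@      # re-split into words
    show_argc "$*"    # kept as one string
}
demo "ls /root"       # prints argc=2 then argc=1
```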

Copying files to multiple hosts

LOCAL_DIR="/opt/tmp"
PACKAGE_DIR="/opt/package"
APP_DIR="/opt/source"
JDK_NAME="jdk-8u212-linux-x64.tar.gz"


# Wrapper: copy a file or directory to every host
function remote_transfer
{
	SRC_FILE=$1
	DST_DIR=$2
	# The function requires 2 arguments: a local file or directory, then a remote directory
	if [ $# -lt 2 ];then
		echo "Usage: $0 <file|dir> <dst_dir>"
		exit 1
	fi

	# Exit with a message if the source (first argument) does not exist
	if [ ! -e $SRC_FILE ];then
		echo "ERROR - $SRC_FILE is not exist,Please check..."
		exit 1
	fi


	for host in $HOST_LIST;do
		echo "+++++++++++++++Transfer File To HOST: $host+++++++++++++++"
		CMD_NUM=`expr $CMD_NUM + 1`
		# Create the remote directory (second argument) if it does not exist
		ssh -o StrictHostKeyChecking=no root@$host "if [ ! -e $DST_DIR ];then mkdir $DST_DIR -p;fi"
		
		scp -r -o StrictHostKeyChecking=no $SRC_FILE root@$host:$DST_DIR/
		if [ $? -eq 0 ];then
			echo "Remote Host: $host - $CMD_NUM - INFO - SCP $SRC_FILE To dir $DST_DIR Success"
		else
			echo "Remote Host: $host - $CMD_NUM - ERROR - SCP $SRC_FILE To dir $DST_DIR Failed"
		fi
	done

}

# Step 2: install and configure the JDK
remote_transfer $LOCAL_DIR/$JDK_NAME $PACKAGE_DIR
remote_execute "if [ ! -d $APP_DIR ];then mkdir -p $APP_DIR;fi"
remote_execute "tar zxvf $PACKAGE_DIR/$JDK_NAME -C $APP_DIR"

cat >> /etc/profile << EOF
export JAVA_HOME=/opt/source/jdk1.8.0_212
export PATH=\$PATH:\$JAVA_HOME/bin:\$JAVA_HOME/jre/bin
EOF

remote_transfer /etc/profile /etc/
remote_execute "source /etc/profile"
remote_execute "java -version"

Installing and configuring ZooKeeper, script version

Step 1: transfer the zookeeper archive from the management host to the three nodes
Step 2: extract the archive
tar -zxvf apache-zookeeper-3.7.0-bin.tar.gz -C /opt/source/
Step 3: create the zookeeper symlink
ln -sv /opt/source/apache-zookeeper-3.7.0-bin/ /opt/source/zookeeper
Step 4: edit the zookeeper configuration
	(1) Copy the sample config file
		cp zoo_sample.cfg zoo.cfg
	(2) Append to the end of zoo.cfg
		server.1=192.168.231.201:2888:3888
		server.2=192.168.231.202:2888:3888
		server.3=192.168.231.203:2888:3888
	(3) Create the zookeeper data directory
		mkdir /data/zk -pv
	(4) Change the data directory in the config file
		dataDir=/tmp/zookeeper becomes dataDir=/data/zk
	(5) Write a unique myid file into the data directory
		node01: echo 1 > /data/zk/myid
		node02: echo 2 > /data/zk/myid
		node03: echo 3 > /data/zk/myid

Step 5: start the service and verify
/opt/source/zookeeper/bin/zkServer.sh start
/opt/source/zookeeper/bin/zkServer.sh status
ZK_NAME="apache-zookeeper-3.7.0-bin.tar.gz"

# Step 3: install, configure, and start zookeeper
remote_transfer $LOCAL_DIR/$ZK_NAME $PACKAGE_DIR
remote_execute "tar zxvf $PACKAGE_DIR/$ZK_NAME -C $APP_DIR"

remote_execute "if [ -e $APP_DIR/zookeeper ];then rm -f $APP_DIR/zookeeper;fi"
remote_execute "ln -sv $APP_DIR/apache-zookeeper-3.7.0-bin $APP_DIR/zookeeper"

remote_execute "cp $APP_DIR/zookeeper/conf/zoo_sample.cfg $APP_DIR/zookeeper/conf/zoo.cfg"

cat > $LOCAL_DIR/zoo_tmp.conf << EOF
server.1=192.168.231.201:2888:3888
server.2=192.168.231.202:2888:3888
server.3=192.168.231.203:2888:3888
EOF

remote_transfer $LOCAL_DIR/zoo_tmp.conf /tmp
remote_execute "cat /tmp/zoo_tmp.conf >> $APP_DIR/zookeeper/conf/zoo.cfg"

remote_execute "if [ -e /data/zk ];then rm -rf /data/zk;fi"
remote_execute "mkdir /data/zk -p"
remote_execute "sed -i 's/dataDir=\/tmp\/zookeeper/dataDir=\/data\/zk/g' $APP_DIR/zookeeper/conf/zoo.cfg"

remote_execute 'if [ `hostname` == "server01" ];then echo 1 > /data/zk/myid;fi'
remote_execute 'if [ `hostname` == "server02" ];then echo 2 > /data/zk/myid;fi'
remote_execute 'if [ `hostname` == "server03" ];then echo 3 > /data/zk/myid;fi'

remote_execute "jps | grep QuorumPeerMain | grep -v grep | awk '{print \$1}' > /tmp/zk.pid"
remote_execute 'if [ -s /tmp/zk.pid ];then kill -9 `cat /tmp/zk.pid`;fi'
remote_execute "$APP_DIR/zookeeper/bin/zkServer.sh start"

Installing Scala

Steps for setting up the scala environment:
Step 1: transfer the scala archive from the management host (/opt/tmp) to the remote production hosts' package directory (/opt/package)
Step 2: on each remote production host, extract the scala archive into the install directory (/opt/source)
Step 3: on the management host, append the scala environment variables to the local /etc/profile
Step 4: transfer /etc/profile from the management host to /etc/ on every remote production host
Step 5: on every remote production host, load the environment variables with source
Step 6: verify the scala environment with scala -version
# Step 4: install and configure the scala environment
remote_transfer $LOCAL_DIR/$SCALA_NAME $PACKAGE_DIR
remote_execute "tar zxvf $PACKAGE_DIR/$SCALA_NAME -C $APP_DIR"

cat >> /etc/profile << EOF
export SCALA_HOME=$APP_DIR/scala-2.12.11
export PATH=\$PATH:\$SCALA_HOME/bin
EOF

remote_transfer /etc/profile /etc
remote_execute "source /etc/profile"
remote_execute "scala -version"

Installing and configuring Kafka and starting the service, script version

Step 1: transfer the kafka archive from the management host (/opt/tmp) to /opt/package on every remote production host
Step 2: on each remote production host, extract the kafka archive from /opt/package into the install directory (/opt/source)
Step 3: create the kafka_2.12-2.6.1 symlink; if it already exists, delete it first
Step 4: create the kafka data directory /data/kafka/log; if it already exists, delete it first, then create it
Step 5: edit the kafka config file (/opt/source/kafka/config/server.properties)
	(1) Replace
		 zookeeper.connect=localhost:2181 with
		 zookeeper.connect=192.168.231.201:2181,192.168.231.202:2181,192.168.231.203:2181 
	(2) Change
		 broker.id=0 to:
		 server01: broker.id=100
		 server02: broker.id=101
		 server03: broker.id=102
	(3) Append
		 server01: listeners=PLAINTEXT://192.168.231.201:9092
		 server02: listeners=PLAINTEXT://192.168.231.202:9092
		 server03: listeners=PLAINTEXT://192.168.231.203:9092
	(4) Change
		 log.dirs=/tmp/kafka-logs to log.dirs=/data/kafka/log
Step 6: start the kafka service; if a kafka process is already running, kill it first
Step 7: create a test topic (test) and describe it to verify the kafka setup
# Step 5: install, configure, and start kafka

remote_transfer $LOCAL_DIR/$KAFKA_NAME $PACKAGE_DIR
remote_execute "tar zxvf $PACKAGE_DIR/$KAFKA_NAME -C $APP_DIR"

remote_execute "if [ -e $APP_DIR/kafka ];then rm -rf $APP_DIR/kafka;fi"
remote_execute "ln -sv $APP_DIR/kafka_2.12-2.6.1 $APP_DIR/kafka"

remote_execute "if [ -e /data/kafka/log ];then rm -rf /data/kafka/log;fi"
remote_execute "mkdir -p /data/kafka/log"

remote_execute "sed -i '/zookeeper.connect=localhost:2181/d' $APP_DIR/kafka/config/server.properties"
remote_execute "sed -i '\$azookeeper.connect=192.168.231.201:2181,192.168.231.202:2181,192.168.231.203:2181' $APP_DIR/kafka/config/server.properties"

remote_execute "if [ \`hostname\` == "server01" ];then sed -i 's/broker.id=0/broker.id=100/g' $APP_DIR/kafka/config/server.properties;fi"
remote_execute "if [ \`hostname\` == "server02" ];then sed -i 's/broker.id=0/broker.id=101/g' $APP_DIR/kafka/config/server.properties;fi"
remote_execute "if [ \`hostname\` == "server03" ];then sed -i 's/broker.id=0/broker.id=102/g' $APP_DIR/kafka/config/server.properties;fi"


remote_execute "if [ \`hostname\` == "server01" ];then sed -i '\$alisteners=PLAINTEXT://192.168.231.201:9092' $APP_DIR/kafka/config/server.properties;fi"
remote_execute "if [ \`hostname\` == "server02" ];then sed -i '\$alisteners=PLAINTEXT://192.168.231.202:9092' $APP_DIR/kafka/config/server.properties;fi"
remote_execute "if [ \`hostname\` == "server03" ];then sed -i '\$alisteners=PLAINTEXT://192.168.231.203:9092' $APP_DIR/kafka/config/server.properties;fi"

remote_execute "sed -i 's/log.dirs=\/tmp\/kafka-logs/log.dirs=\/data\/kafka\/log/g' $APP_DIR/kafka/config/server.properties"

remote_execute "jps | grep Kafka | grep -v grep | awk '{print \$1}' > /tmp/kafka.pid"
remote_execute "if [ -s /tmp/kafka.pid ];then kill -9 \`cat /tmp/kafka.pid\`;fi"

remote_execute "$APP_DIR/kafka/bin/kafka-server-start.sh -daemon $APP_DIR/kafka/config/server.properties"

sleep 30

remote_execute "if [ \`hostname\` == "server01" ];then $APP_DIR/kafka/bin/kafka-topics.sh --zookeeper localhost --create --topic test --partitions 5 --replication-factor 2;fi"

sleep 5

remote_execute "if [ \`hostname\` == "server01" ];then $APP_DIR/kafka/bin/kafka-topics.sh --zookeeper localhost --topic test --describe;fi"

One-click start/stop

The framework

#!/bin/bash
#

function service_start
{
	:  # start logic goes here
}

function service_stop
{
	:  # stop logic goes here
}

function service_status
{
	:  # status logic goes here
}

case $1 in
	start)
		....
		;;
	stop)
		....
		;;
	status)
		....
		;;
	*)
		....
		;;
esac

Kafka one-click start/stop

#!/bin/bash
#

HOST_LIST="192.168.231.201 192.168.231.202 192.168.231.203"

function service_status
{

	status_idx=0
	result=0
	while [ $status_idx -lt 3 ];do
		ssh -o StrictHostKeyChecking=no $1 "jps | grep -w Kafka" &> /dev/null
		if [ $? -eq 0 ];then
			result=`expr $result + 1`
		fi
		status_idx=`expr $status_idx + 1`
	done

	if [ $result -eq 3 ];then
		return
	fi
	return 99

}


function service_start
{
	for host in $HOST_LIST;do
		echo "------Now Begin To Start Kafka In Host:$host------"
		service_status $host
		if [ $? -eq 0 ];then
			echo "Kafka broker in $host is already RUNNING"
		else
			echo "Now Kafka broker is STOPPED,Start it...."
			ssh -o StrictHostKeyChecking=no $host "/opt/source/kafka/bin/kafka-server-start.sh -daemon /opt/source/kafka/config/server.properties" &> /dev/null
			index=0
			while [ $index -lt 5 ];do
				service_status $host
				if [ $? -ne 0 ];then
					index=`expr $index + 1`
					echo "$index Times: Kafka broker in $host start failed...Please wait..."
					echo "After 3 seconds will check kafka status again..."
					sleep 3
					continue
				else
					echo "OK...Kafka broker in $host is RUNNING..."
					break
				fi
			done
			if [ $index -eq 5 ];then
				echo "Sorry...Kafka broker Start Failed..Please login $host to check"
			fi
		fi
	done

}

function service_stop
{
	for host in $HOST_LIST;do
		echo "------Now Begin To Stop Kafka In Host:$host------"
		service_status $host
		if [ $? -ne 0 ];then
			echo "Kafka broker in $host is already STOPPED"
		else
			echo "Now Kafka broker is RUNNING,Stop it...."
			ssh -o StrictHostKeyChecking=no $host "/opt/source/kafka/bin/kafka-server-stop.sh" &> /dev/null
			index=0
			while [ $index -lt 5 ];do
				service_status $host
				if [ $? -eq 0 ];then
					index=`expr $index + 1`
					echo "$index Times: Kafka broker in $host is stopping...Please wait..."
				echo "After 5 seconds will check kafka status again..."
					sleep 5
					continue
				else
					echo "OK...Kafka broker in $host is STOPPED now..."
					break
				fi
			done
			if [ $index -eq 5 ];then
				echo "Sorry...Kafka broker Stop Failed..Please login $host to check"
			fi
		fi
	done

}


case $1 in
	start)
		service_start
		;;
	stop)
		service_stop
		;;
	status)
		for host in $HOST_LIST;do
			echo "--------Now Begin To Detect Kafka Status In Host: $host--------"
			service_status $host
			if [ $? -eq 0 ];then
				echo "Kafka broker in $host is RUNNING"
			else
				echo "Kafka broker in $host is STOPPED"
			fi
		done	
		;;
	*)
		;;
esac
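service_status above probes each host three times and returns 0 only if all three probes succeed, which smooths over transient jps hiccups. The same voting logic, with the ssh call replaced by a stub so it can be exercised without a cluster:

```shell
# Three-probe vote: return 0 (RUNNING) only if every probe succeeds.
probe() { return 0; }   # stub standing in for: ssh $host "jps | grep -w Kafka"
check3() {
    local i=0 ok=0
    while [ $i -lt 3 ]; do
        probe && ok=$(( ok + 1 ))
        i=$(( i + 1 ))
    done
    [ $ok -eq 3 ]
}
check3 && echo RUNNING || echo STOPPED   # prints RUNNING with this stub
```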

ZooKeeper one-click start/stop

#!/bin/bash
#

HOST_LIST="192.168.231.201 192.168.231.202 192.168.231.203"

STATUS_CMD="jps | grep -w QuorumPeerMain"
START_CMD="/opt/source/zookeeper/bin/zkServer.sh start"
STOP_CMD="/opt/source/zookeeper/bin/zkServer.sh stop"
SERVICE_NAME="ZK Server"

function service_status
{

	status_idx=0
	result=0
	while [ $status_idx -lt 3 ];do
		ssh -o StrictHostKeyChecking=no $1 $STATUS_CMD &> /dev/null
		if [ $? -eq 0 ];then
			result=`expr $result + 1`
		fi
		status_idx=`expr $status_idx + 1`
	done

	if [ $result -eq 3 ];then
		return
	fi
	return 99

}


function service_start
{
	for host in $HOST_LIST;do
		echo "------Now Begin To Start $SERVICE_NAME In Host:$host------"
		service_status $host
		if [ $? -eq 0 ];then
			echo "$SERVICE_NAME in $host is already RUNNING"
		else
			echo "Now $SERVICE_NAME is STOPPED,Start it...."
			ssh -o StrictHostKeyChecking=no $host $START_CMD &> /dev/null 
			index=0
			while [ $index -lt 5 ];do
				service_status $host
				if [ $? -ne 0 ];then
					index=`expr $index + 1`
					echo "$index Times: $SERVICE_NAME in $host start failed...Please wait..."
					echo "After 3 seconds will check $SERVICE_NAME status again..."
					sleep 3
					continue
				else
					echo "OK...$SERVICE_NAME in $host is RUNNING..."
					break
				fi
			done
			if [ $index -eq 5 ];then
				echo "Sorry...$SERVICE_NAME Start Failed..Please login $host to check"
			fi
		fi
	done

}

function service_stop
{
	for host in $HOST_LIST;do
		echo "------Now Begin To Stop $SERVICE_NAME In Host:$host------"
		service_status $host
		if [ $? -ne 0 ];then
			echo "$SERVICE_NAME in $host is already STOPPED"
		else
			echo "Now $SERVICE_NAME is RUNNING,Stop it...."
			ssh -o StrictHostKeyChecking=no $host $STOP_CMD &> /dev/null
			index=0
			while [ $index -lt 5 ];do
				service_status $host
				if [ $? -eq 0 ];then
					index=`expr $index + 1`
					echo "$index Times: $SERVICE_NAME in $host is stopping...Please wait..."
					echo "After 3 seconds will check $SERVICE_NAME status again..."
					sleep 3
					continue
				else
					echo "OK...$SERVICE_NAME in $host is STOPPED now..."
					break
				fi
			done
			if [ $index -eq 5 ];then
				echo "Sorry...$SERVICE_NAME Stop Failed..Please login $host to check"
			fi
		fi
	done

}


function usage
{

cat << EOF
Usage 1: sh $0 start          # Start ZK Server Process Defined In HOST_LIST
Usage 2: sh $0 stop           # Stop ZK Server Process Defined In HOST_LIST
Usage 3: sh $0 status         # Get ZK Server Status Defined In HOST_LIST
EOF

}

case $1 in
	start)
		service_start
		;;
	stop)
		service_stop
		;;
	status)
		for host in $HOST_LIST;do
			echo "--------Now Begin To Detect $SERVICE_NAME Status In Host: $host--------"
			service_status $host
			if [ $? -eq 0 ];then
				echo "$SERVICE_NAME in $host is RUNNING"
			else
				echo "$SERVICE_NAME in $host is STOPPED"
			fi
		done	
		;;
	*)
		usage
		;;
esac
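The `service_status` function above runs the status command three times and treats the service as running only if every check succeeds. A minimal local sketch of that triple-check logic (the `check_three_times` helper and the use of `true`/`false` as stand-ins for the remote status check are illustrative, not part of the original script):

```shell
#!/bin/bash
# Sketch of the triple-check logic in service_status: run the status
# command three times and count it as RUNNING only if all three succeed,
# which guards against catching a process mid-restart.
check_three_times() {
  # $1: a command whose exit status stands in for the remote status check
  local result=0 i=0
  while [ $i -lt 3 ]; do
    eval "$1" > /dev/null 2>&1 && result=$(expr $result + 1)
    i=$(expr $i + 1)
  done
  [ $result -eq 3 ]
}

check_three_times "true"  && echo "RUNNING"   # all three checks pass
check_three_times "false" || echo "STOPPED"   # no check passes
```

The same pattern applies unchanged to the nginx version of the script below; only `STATUS_CMD` differs.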

One-click nginx start/stop

#!/bin/bash
#

HOST_LIST="192.168.231.201 192.168.231.202 192.168.231.203"
STATUS_CMD="ps -ef | grep nginx | grep -v grep"
START_CMD="/usr/local/nginx/sbin/nginx"
STOP_CMD="/usr/local/nginx/sbin/nginx -s stop"
SERVICE_NAME="Nginx"

function service_status
{

	status_idx=0
	result=0
	while [ $status_idx -lt 3 ];do
		ssh -o StrictHostKeyChecking=no $1 $STATUS_CMD &> /dev/null
		if [ $? -eq 0 ];then
			result=`expr $result + 1`
		fi
		status_idx=`expr $status_idx + 1`
	done

	if [ $result -eq 3 ];then
		return
	fi
	return 99

}


function service_start
{
	for host in $HOST_LIST;do
		echo "------Now Begin To Start $SERVICE_NAME In Host:$host------"
		service_status $host
		if [ $? -eq 0 ];then
			echo "$SERVICE_NAME in $host is already RUNNING"
		else
			echo "Now $SERVICE_NAME is STOPPED,Start it...."
			ssh -o StrictHostKeyChecking=no $host $START_CMD &> /dev/null 
			index=0
			while [ $index -lt 5 ];do
				service_status $host
				if [ $? -ne 0 ];then
					index=`expr $index + 1`
					echo "$index Times: $SERVICE_NAME in $host start failed...Please wait..."
					echo "After 3 seconds will check $SERVICE_NAME status again..."
					sleep 3
					continue
				else
					echo "OK...$SERVICE_NAME in $host is RUNNING..."
					break
				fi
			done
			if [ $index -eq 5 ];then
				echo "Sorry...$SERVICE_NAME Start Failed..Please login $host to check"
			fi
		fi
	done

}

function service_stop
{
	for host in $HOST_LIST;do
		echo "------Now Begin To Stop $SERVICE_NAME In Host:$host------"
		service_status $host
		if [ $? -ne 0 ];then
			echo "$SERVICE_NAME in $host is already STOPPED"
		else
			echo "Now $SERVICE_NAME is RUNNING,Stop it...."
			ssh -o StrictHostKeyChecking=no $host $STOP_CMD &> /dev/null
			index=0
			while [ $index -lt 5 ];do
				service_status $host
				if [ $? -eq 0 ];then
					index=`expr $index + 1`
					echo "$index Times: $SERVICE_NAME in $host is stopping...Please wait..."
					echo "After 3 seconds will check $SERVICE_NAME status again..."
					sleep 3
					continue
				else
					echo "OK...$SERVICE_NAME in $host is STOPPED now..."
					break
				fi
			done
			if [ $index -eq 5 ];then
				echo "Sorry...$SERVICE_NAME Stop Failed..Please login $host to check"
			fi
		fi
	done

}


function usage
{

cat << EOF
Usage 1: sh $0 start          # Start Nginx Process Defined In HOST_LIST
Usage 2: sh $0 stop           # Stop Nginx Process Defined In HOST_LIST
Usage 3: sh $0 status         # Get Nginx Status Defined In HOST_LIST
EOF

}

case $1 in
	start)
		service_start
		;;
	stop)
		service_stop
		;;
	status)
		for host in $HOST_LIST;do
			echo "--------Now Begin To Detect $SERVICE_NAME Status In Host: $host--------"
			service_status $host
			if [ $? -eq 0 ];then
				echo "$SERVICE_NAME in $host is RUNNING"
			else
				echo "$SERVICE_NAME in $host is STOPPED"
			fi
		done	
		;;
	*)
		usage
		;;
esac
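Both scripts share the same start-and-poll pattern: issue the start (or stop) command over ssh, then re-check the status up to 5 times with a 3-second pause before declaring failure. A minimal local sketch of that pattern (the `wait_until_running` helper is hypothetical, with an ordinary command substituted for the remote status check):

```shell
#!/bin/bash
# Sketch of the poll-with-retry loop in service_start/service_stop:
# after issuing the command, re-check status up to 5 times, sleeping
# between attempts, and report failure only when all attempts are used up.
wait_until_running() {
  # $1: a command whose success means "service is running"
  local index=0
  while [ $index -lt 5 ]; do
    if eval "$1" > /dev/null 2>&1; then
      echo "RUNNING"
      return 0
    fi
    index=$(expr $index + 1)
    sleep 1
  done
  echo "start failed"
  return 1
}

wait_until_running "true"   # status check succeeds on the first attempt
```

In the real scripts the retry cap of 5 and the sleep interval are hard-coded; tune them to how long your service takes to come up.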

