Table of Contents
The performance testing process is divided into five main stages: task intake, test planning, test preparation, test execution and tuning, and test wrap-up.
I. Task Intake Stage:
Deliverable: requirements review document
Attend the requirements review; during the task, archive each version of the review documents. If no document exists, send an email instead and archive the email.
II. Test Planning Stage:
Deliverables: schedule, data preparation plan, interface test plan (updated continuously as the process evolves), service call-relationship diagram
Schedule 2.1
Data preparation plan — dependency survey of the related data (detailed diagram) 2.2
Data preparation plan — dependency survey of the related data (simplified diagram) 2.3
Interface test plan — survey of business call relationships 2.4
Service call-relationship diagram — must be drawn up when the business call chain is complex 2.5
During the planning stage, send a [test plan] email to stakeholders such as development and operations.
The email should include the deliverables listed above: schedule, data plan, interface plan, and service call-relationship diagram.
III. Test Preparation Stage:
Deliverables: resource request form, script development progress sheet, prepared data files and backup scripts
Environment setup and adjustment — send operations the resource request form confirmed with development and request the resources. After allocation, testers must verify the resource configuration is correct, then deploy the load-test directory, shell scripts, the nmon binary, and SSH mutual trust.
The shell scripts here are mainly automon.sh and automonbatch.sh (explained later).
Resource request — survey the required service resources with development 3.1
Application deployment and connectivity check — once the resources are confirmed valid, developers deploy the application services and wire up the environment; testers must sync configuration information such as Zookeeper.
Script development and enhancement — develop the interface scripts
Script development progress sheet 3.2
Batch data preparation — complex scenarios need data seeded in advance; prepare shell scripts to back up and restore the affected databases.
1. Data is usually generated via stored procedures or by batch-running interfaces.
2. Database backups let you reuse the prepared data, or restore directly to the state a given scenario needs.
Backup: mysqldump -h<host> -P<port> -u<user> -p<password> --databases <db_name> > <file>.sql
Restore: mysql -uroot -psymdata xinda_product < ${backupfile}
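The backup command above can be wrapped in a small script so each dump gets a timestamped name and repeated runs never overwrite an earlier backup. This is a minimal sketch, not part of the original tooling; DBHOST, DBPORT, DBUSER, and DBPWD are placeholder variables you would fill in yourself.

```shell
#!/bin/sh
# Sketch of a backup wrapper (hypothetical variable names, assumed credentials).
# Each dump file carries a timestamp so later backups never overwrite earlier ones.

backup_name() {
    # $1 = database name; prints e.g. mydb.20240101120000.sql
    printf '%s.%s.sql\n' "$1" "$(date +%Y%m%d%H%M%S)"
}

backup_db() {
    db=$1
    out=$(backup_name "$db")
    # note the plural flag: --databases
    mysqldump -h"$DBHOST" -P"$DBPORT" -u"$DBUSER" -p"$DBPWD" --databases "$db" > "$out" \
        && echo "backup written to $out"
}
```

A matching restore is then simply `mysql -u<user> -p<password> < mydb.<timestamp>.sql`.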
IV. Test Execution and Tuning Stage:
Deliverables: scenario statistics sheet, issue log
Baseline test — the baseline concurrency point is usually around 100 TPS; while it runs, watch the service logs and resource metrics for anomalies.
Single-transaction load test and tuning:
Mixed load test and tuning: multiple scenarios executed in interleaved combination (LR (LoadRunner) is generally more convenient for designing and running complex scenarios).
Deliverables: scenario execution statistics sheet, issue log
Scenario statistics sheet 4.1
Issue log 4.2
V. Report Publication and Review:
Deliverables: stage report, meeting invitation (to walk through the report and the issues or risks encountered during load testing)
The stage report mainly contains:
1. Constraints: limits on staffing or time; notes where resources/configuration are out of sync with production; scenarios that cannot be reproduced (e.g. production data that cannot be simulated)
2. Monitoring logs: the paths where logs were stored during the load test and the collected error information, so developers can retrieve them on their own
3. Business metrics: a summary of the scenario statistics sheets above, see figure 4.1
4. Issue list: for a stage report this sheet must be kept up to date, see figure 4.2
5. System resource monitoring: for services running hot, show the resource metrics (e.g. CPU, I/O, disk reads/writes) of the services involved in the executed scenarios
6. Risk assessment: usually discussed in relation to the constraints
Meeting invitation
VI. Test Asset Archiving:
Archive the deliverables of the five stages above in Jira.
VII. Script Notes
# nmon base directory
monbase=/server/nmondir
# monitoring interval (seconds)
interval=$1
# total number of monitoring samples
sum=$2
# monitoring start time
monstart_time=`date +%Y%m%d%H%M%S`
# remote server ip (central collection host)
rip=10.1.15.242
# local server ip
if [[ `LC_ALL=C ifconfig | grep 'Bcast' |cut -d: -f2 | awk '{ print $1}' |cut -d'.' -f1,2,3` = "10.100.24" ]] || [[ `LC_ALL=C ifconfig | grep 'Bcast' |cut -d: -f2 | awk '{ print $1}' |cut -d'.' -f1,2,3` = "10.103.51" ]];then
lip=`LC_ALL=C ifconfig | grep 'Bcast' |cut -d: -f2 | awk '{ print $1}'`
else
# 10.100.24.85~88 run CentOS 7.0; lip is obtained there as follows
lip=`LC_ALL=C ifconfig |grep -A 1 'eth0:' |awk 'NR==2{print}' | awk '{ print $2}'`
fi
# service name
servicename=`hostname | cut -d '.' -f 1`
# move historical monitoring files into the nmonhis directory
echo "moving historical monitoring files to the nmonhis directory"
mv /server/tomcat/logs/catalina.out.* ${monbase}/nmonhis;
cd ${monbase}
mv *.nmon *.conf *.log ${monbase}/nmonhis;
# optional monitors: replace "[ $? -eq 0 ]" with "[ $? -ne 0 ]" to disable a given monitor
# start dubbo log monitoring (optional)
## dubbo log location
dubbolog=/home/tomcat/dubbo-governance.log
ps -ef |grep dubbo |grep -v grep
if [ $? -eq 0 ] && [ -f "${dubbolog}" ];
then
echo "${lip} starting dubbo log monitoring: ${lip}.dubbo.${monstart_time}.log"
tail -f ${dubbolog} > ${monbase}/${lip}.dubbo.${monstart_time}.log &
sleep 2
ps -ef |grep tail
else
sleep 1
fi
# start nginx log monitoring
## nginx log location
nginxlog=/home/devloper/work/nginx/logs/error.log
ps -ef |grep nginx |grep -v grep
if [ $? -eq 0 ] && [ -f "${nginxlog}" ];
then
echo "${lip} starting nginx log monitoring: ${lip}.nginxerror.${monstart_time}.log"
tail -f ${nginxlog} > ${monbase}/${lip}.nginxerror.${monstart_time}.log &
fi
# start haproxy log monitoring (optional)
## haproxy log location
halog=/var/log/haproxy.log
ps -ef |grep haproxy |grep -v grep
if [ $? -eq 0 ] && [ -f "${halog}" ];
then
echo "${lip} starting haproxy log monitoring: ${lip}.haproxy.${monstart_time}.log"
tail -f ${halog} > ${monbase}/${lip}.haproxy.${monstart_time}.log &
sleep 2
ps -ef |grep tail
else
sleep 1
fi
# start tomcat log monitoring
## tomcat log location
tomcatlog=/server/tomcat/logs/catalina.out
ps -ef|grep tomcat |grep -v grep
if [ $? -eq 0 ] && [ -f "${tomcatlog}" ];
then
echo "${lip} starting tomcat log monitoring: ${lip}.tomcat.${monstart_time}.log"
tail -f ${tomcatlog} > ${monbase}/${lip}.tomcat.${monstart_time}.log &
sleep 2
ps -ef |grep tail
else
sleep 1
fi
# start trident (sidekiq) log monitoring
## sidekiq log location
#sidekiqlog=/home/devloper/work/trident/log/sidekiq.log
sidekiqlog=/data/log/rails/sidekiq.log
ps -ef |grep sidekiq |grep -v grep
if [ $? -eq 0 ] && [ -f "${sidekiqlog}" ];
then
echo "${lip} starting sidekiq log monitoring: ${lip}.sidekiq.${monstart_time}.log"
tail -f ${sidekiqlog} > ${monbase}/${lip}.sidekiq.${monstart_time}.log &
sleep 2
ps -ef |grep tail
else sleep 1
fi
# start redis log monitoring (optional)
## redis log location
redislog=/server/redis/logs/redis.log
ps -fe|grep redis |grep -v grep
if [ $? -eq 0 ] && [ -f "${redislog}" ]
then
echo "${lip} starting redis log monitoring: ${lip}.redis.${monstart_time}.log"
tail -f ${redislog} >${monbase}/${lip}.redis.${monstart_time}.log &
sleep 2
ps -ef |grep tail
else sleep 1
fi
# start rabbitmq log monitoring (optional)
## rabbitmq log location
mqlog=/var/tmp/rabbitmq-tracing/RabbitMQ_Tracing.log
ps -fe|grep rabbitmq |grep -v grep
if [ $? -eq 0 ] && [ -f "${mqlog}" ];
then
echo "${lip} starting rabbitmq log monitoring: ${lip}.rabbitmq.${monstart_time}.log"
tail -f ${mqlog} > ${monbase}/${lip}.rabbitmq.${monstart_time}.log &
sleep 2
ps -ef |grep tail
else
sleep 1
fi
# start mongo log monitoring (optional)
## mongo log location
mongolog=/server/mongodb/logs/mongodb.log
ps -fe|grep mongod |grep -v grep
if [ $? -eq 0 ] && [ -f "${mongolog}" ]
then
echo "${lip} starting mongo log monitoring: ${lip}.mongo.${monstart_time}.log"
tail -f ${mongolog} > ${monbase}/${lip}.mongo.${monstart_time}.log &
sleep 2
ps -ef |grep tail
else
sleep 1
fi
## mysql log locations
mysqlerr=/server/mysql/log/mysql.err.log
mysqlslow=/server/mysql/log/mysql.slow.log
mysqlerr_Trident=/server/mysql_data/mysql.err.log
mysqlslow_Trident=/server/mysql_data/mysql.slow.log
ps -fe|grep mysqld |grep -v grep
if [ $? -eq 0 ] && [ -f "${mysqlslow}" ]
then
echo "${lip} starting mysql slow-log monitoring: ${lip}.mysqlslow.${monstart_time}.log"
tail -f ${mysqlslow} > ${monbase}/${lip}.mysqlslow.${monstart_time}.log &
sleep 2
ps -ef |grep tail
else sleep 1
fi
ps -fe|grep mysqld |grep -v grep
if [ $? -eq 0 ] && [ -f "${mysqlslow_Trident}" ]
then
echo "${lip} starting mysql slow-log monitoring: ${lip}.mysqlslow.${monstart_time}.log"
tail -f ${mysqlslow_Trident} > ${monbase}/${lip}.mysqlslow.${monstart_time}.log &
sleep 2
ps -ef |grep tail
else sleep 1
fi
ps -fe|grep mysqld |grep -v grep
if [ $? -eq 0 ] && [ -f "${mysqlerr}" ]
then
echo "${lip} starting mysql error-log monitoring: ${lip}.mysqlerr.${monstart_time}.log"
tail -f ${mysqlerr} > ${monbase}/${lip}.mysqlerr.${monstart_time}.log &
sleep 2
ps -ef |grep tail
else sleep 1
fi
ps -fe|grep mysqld |grep -v grep
if [ $? -eq 0 ] && [ -f "${mysqlerr_Trident}" ]
then
echo "${lip} starting mysql error-log monitoring: ${lip}.mysqlerr.${monstart_time}.log"
tail -f ${mysqlerr_Trident} > ${monbase}/${lip}.mysqlerr.${monstart_time}.log &
sleep 2
ps -ef |grep tail
else sleep 1
fi
# vmstat monitoring (disabled)
#echo "${lip} starting vmstat monitoring, start time: ${monstart_time}, interval: ${interval}s, samples: ${sum}"
#df -hl >${lip}.vmstat.${monstart_time}.conf;sleep 1;vmstat ${interval} ${sum} >>${lip}.vmstat.${monstart_time}.conf &
#sleep 1
# iostat monitoring (disabled)
#echo "${lip} starting iostat monitoring, start time: ${monstart_time}, interval: ${interval}s, samples: ${sum}"
#iostat -x -k -d ${interval} ${sum} >>${lip}.iosat.${monstart_time}.conf &
#sleep 1
# vnstat monitoring (disabled)
#echo "${lip} starting vnstat monitoring, start time: ${monstart_time}, interval: ${interval}s, samples: ${sum}"
#vnstat -l -i eth0 >>${lip}.vnsat.${monstart_time}.conf &
#sleep 2
# start dstat monitoring
echo "${lip} starting dstat monitoring, start time: ${monstart_time}, interval: ${interval}s, samples: ${sum}"
dstat -tlcmsgnrp ${interval} ${sum} >>${lip}.dstat.${monstart_time}.dstat &
# start nmon monitoring
echo "${lip} starting nmon monitoring, start time: ${monstart_time}, interval: ${interval}s, samples: ${sum}"
${monbase}/nmon -F ${lip}.nmon.${monstart_time}.nmon -t -s $interval -c $sum &
sleep 2
ps -ef |grep nmon
sleep 1
let endtime=$interval*$sum+5
echo "${lip} monitors started; the monitoring scenario will finish in ${endtime} seconds, please wait"
sleep ${endtime}
# collect tomcat error log entries
if [ -f "${monbase}/${lip}.tomcat.${monstart_time}.log" ]; then
grep -A 30 'Exception\|ERROR\|Fail\|失败' ${monbase}/${lip}.tomcat.${monstart_time}.log > ${lip}.tomcat.${monstart_time}.error.log
# delete the error file if no error lines were captured
filename=${lip}.tomcat.${monstart_time}.error.log
filesize=`ls -l $filename | awk '{ print $5 }'`
if [ $filesize -eq 0 ] ;then
rm -f ${filename}
fi
fi
sleep 5
echo "${lip} monitoring finished, killing leftover background monitor processes"
ps -ef | grep tail | grep -v grep | awk '{print $2}' | xargs kill -9 &
ps -ef |grep "/server/nmondir/nmon" | grep -v grep | awk '{print $2}' | xargs kill -9 &
#ps -ef | grep vmstat | grep -v grep | awk '{print $2}' | xargs kill -9 &
#ps -ef | grep iostat | grep -v grep | awk '{print $2}' | xargs kill -9 &
#ps -ef | grep vnstat | grep -v grep | awk '{print $2}' | xargs kill -9 &
ps -ef | grep dstat | grep -v grep | awk '{print $2}' | xargs kill -9 &
sleep 3
echo "${lip} monitoring task complete"
# transfer the monitoring files to the central console (disabled)
#scp ${monbase}/*.log *.conf *.nmon ${rip}:${monbase} &
#scp $lip.tomcat.${monstart_time}.log ${rip}:${monbase} &
#echo "$lip finished transferring monitoring files to the central console"
sleep 5
exit 0
Usage: automon.sh 30 100  (30 = monitoring interval in seconds, 100 = total number of samples)
Essentially this is the nmon command wrapped once inside a script.
Script walkthrough:
1. Start log monitoring for each service and middleware (tomcat, dubbo, mq, nginx, redis, mongo, mysqlslow, mysqlerr)
2. Start nmon
3. When the monitoring sleep expires, filter the error logs
4. Kill the tail processes
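Step 4 above kills every `tail` on the host by name, which would also hit tails started by anything else running there. An alternative pattern (a sketch, not part of the original script; `PIDFILE`, `start_tail`, and `stop_tails` are hypothetical names) is to record the PID of each background tail and kill exactly those:

```shell
#!/bin/sh
# Sketch: remember the PIDs of the background tails this run started, then
# kill only those at the end, instead of pkill-ing every tail on the host.

PIDFILE=${PIDFILE:-/tmp/monitor.pids}
: > "$PIDFILE"

start_tail() {
    # $1 = log file to follow, $2 = capture file
    tail -f "$1" > "$2" &
    echo $! >> "$PIDFILE"     # $! is the PID of the tail just backgrounded
}

stop_tails() {
    while read -r pid; do
        kill "$pid" 2>/dev/null
    done < "$PIDFILE"
    rm -f "$PIDFILE"
}
```

This also avoids `kill -9`, giving each tail a chance to flush its capture file.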
# define the monitored environment, monitoring interval, and total sample count
echo -e "Enter the monitoring interval (number, >=2 seconds), total sample count (number), and environment (t/api), separated by spaces\nExample: 2 5 api"
read interval sum envname
if [ ${interval} -lt 2 ];then
echo "monitoring interval ${interval} is <2 seconds, not allowed"
exit 0
fi
# monitoring base directory
monbase=/server/nmondir
# tomcat log directory
tomcatbase=/server/tomcat/logs
# ip variable
ip=`LC_ALL=C ifconfig | grep 'inet addr:'| grep -v '127.0.0.1' |cut -d: -f2 | awk '{ print $1}'`
# move historical monitoring files into the nmonhis directory
echo "moving historical monitoring files to the nmonhis directory"
cd ${monbase}
mv *.nmon *.conf *.log ${monbase}/nmonhis;ls -lrt ${monbase}/nmonhis;ls -lrt
if [ "${envname}" = "api" ]; then switch="api"
elif [ "${envname}" = "t" ]; then switch="t"
else switch="*"
fi
case $switch in
api)
echo "connecting to the monitored servers of the remote ${envname} environment and starting the monitor scripts"
Host_List=" 10.100.24.10
10.100.24.11
10.100.24.12
10.100.24.13
10.100.24.133
10.100.24.14
10.100.24.15
10.100.24.16
10.100.24.17
10.100.24.18
10.100.24.19
10.100.24.73
10.100.24.74
10.100.24.75
10.100.24.76
10.100.24.77
10.100.24.78
10.100.24.79
10.100.24.8
10.100.24.80
10.100.24.81
10.100.24.82
10.100.24.83
10.100.24.84
10.100.24.85
10.100.24.86
10.100.24.87
10.100.24.88
10.100.24.89
10.100.24.9
10.103.51.63
10.103.51.71"
for Host in $Host_List
do
ssh root@$Host ${monbase}/servermon.sh ${interval} ${sum}&
ssh root@$Host ${monbase}/automon.sh ${interval} ${sum}&
done
;;
t)
echo "connecting to the monitored servers of the remote ${envname} environment and starting the monitor scripts"
Host_List=" 10.103.51.101
10.103.51.102
10.103.51.137
10.103.51.138
10.103.51.139
10.103.51.156
10.103.51.164
10.103.51.174
10.103.51.217
10.103.51.221
10.103.51.222
10.103.51.225
10.103.51.229
10.103.51.230
10.103.51.231
10.103.51.232
10.103.51.233
10.103.51.234
10.103.51.235
10.103.51.237
10.103.51.26
10.103.51.43
10.103.51.52
10.103.51.54
10.103.51.55
10.103.51.56
10.103.51.57
10.103.51.58
10.103.51.59
10.103.51.60
10.103.51.61
10.103.51.62
10.103.51.63
10.103.51.64
10.103.51.65
10.103.51.66
10.103.51.67
10.103.51.68
10.103.51.69
10.103.51.70
10.103.51.71
10.103.51.72
10.103.51.73
10.103.51.74
10.103.51.75
10.103.51.76
10.103.51.77
10.103.51.79
10.103.51.80
10.103.51.88
10.103.51.89
10.103.51.90
10.103.51.91
10.103.51.92
10.103.51.93
10.103.51.94
10.103.51.95
10.103.51.96
10.103.51.97
10.103.51.98"
for Host in $Host_List
do
ssh $Host ${monbase}/servermon.sh ${interval} ${sum}&
ssh $Host ${monbase}/automon.sh ${interval} ${sum}&
done
;;
*)
echo "no switch can be matched!"
;;
esac
#done  # end marker when looping over multiple arguments
#wait
echo "batch monitoring finished"
exit 0
Usage: run automonbatch.sh and enter, e.g., 30 100 t  (30 = monitoring interval, 100 = total samples, t/api = environment)
Script walkthrough: iterate over the server list for the chosen environment and run automon.sh remotely on each host.
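The loop above backgrounds each `ssh` and moves on, and the commented-out `wait` hints at the missing synchronization. The same pattern with an explicit `wait` can be sketched as follows (`run_on_hosts` and the `RUN` override hook are hypothetical names; setting `RUN=echo` gives a dry run without contacting any host):

```shell
#!/bin/sh
# Sketch: start a command on every host in parallel, then block until every
# ssh has returned, so the caller knows all remote monitors were launched.
# RUN is a hypothetical hook: it defaults to ssh, set RUN=echo to dry-run.

RUN=${RUN:-ssh}

run_on_hosts() {
    cmd=$1; shift
    for host in "$@"; do
        $RUN "root@$host" "$cmd" &
    done
    wait    # block until all background jobs finish
}
```

Usage: `run_on_hosts "/server/nmondir/automon.sh 30 100" 10.103.51.101 10.103.51.102 …`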
# monitoring base directory
monbase=/server/nmondir
# monitoring start time
monstart_time=`date +%Y%m%d%H%M%S`
# ip variable
ip=`LC_ALL=C ifconfig | grep 'inet addr:'| grep -v '127.0.0.1' |cut -d: -f2 | awk '{ print $1}'`
# list of remote monitored servers
#Hostlist=="192.168.206.88"
# remote destination directory for the files
dst=/server/nmondir
#echo -e "Enter the file names or glob patterns to operate on, separated by spaces\nFile examples: nmon automon.sh servermon.sh\nGlob examples: *.log *.nmon"
#read scpfiename
# accept multiple file names to copy as arguments
count=1
while [ "$#" -ge "1" ];do
scpfilename=$1
echo "file number $count: $1"
let count=count+1
shift
if [ "${scpfilename}" = "nmon" ] || [ "${scpfilename}" = "automon.sh" ] || [ "${scpfilename}" = "servermon.sh" ] || [ "${scpfilename}" = "rm.sh" ] ; then switch="1"
elif [ "${scpfilename}" = "*.log" ] || [ "${scpfilename}" = "*.nmon" ] || [ "${scpfilename}" = "*.conf" ] || [ "${scpfilename}" = "*.timelog" ] || [ "${scpfilename}" = "*servercollect*.log" ] || [ "${scpfilename}" = "*servermon*.log" ] || [ "${scpfilename}" = "*.dstat" ] || [ "${scpfilename}" = "*.servercollect" ] ; then switch="2"
else switch="*"
fi
case $switch in
1)
echo "batch-copying local ${scpfilename} to the remote monitored servers"
Host_List=" 10.103.51.101
10.103.51.102
10.103.51.137
10.103.51.138
10.103.51.139
10.103.51.156
10.103.51.164
10.103.51.174
10.103.51.217
10.103.51.221
10.103.51.222
10.103.51.225
10.103.51.229
10.103.51.230
10.103.51.231
10.103.51.232
10.103.51.233
10.103.51.234
10.103.51.235
10.103.51.237
10.103.51.26
10.103.51.43
10.103.51.52
10.103.51.54
10.103.51.55
10.103.51.56
10.103.51.57
10.103.51.58
10.103.51.59
10.103.51.60
10.103.51.61
10.103.51.62
10.103.51.63
10.103.51.64
10.103.51.65
10.103.51.66
10.103.51.67
10.103.51.68
10.103.51.69
10.103.51.70
10.103.51.71
10.103.51.72
10.103.51.73
10.103.51.74
10.103.51.75
10.103.51.76
10.103.51.77
10.103.51.79
10.103.51.80
10.103.51.88
10.103.51.89
10.103.51.90
10.103.51.91
10.103.51.92
10.103.51.93
10.103.51.94
10.103.51.95
10.103.51.96
10.103.51.97
10.103.51.98"
for Host in $Host_List
do
scp -o GSSAPIAuthentication=no ${monbase}/${scpfilename} $Host:${dst}
done
echo "batch-copying local ${scpfilename} to the remote monitored servers"
Host_List=" 10.100.24.10
10.100.24.11
10.100.24.12
10.100.24.13
10.100.24.133
10.100.24.14
10.100.24.15
10.100.24.16
10.100.24.17
10.100.24.18
10.100.24.19
10.100.24.73
10.100.24.74
10.100.24.75
10.100.24.76
10.100.24.77
10.100.24.78
10.100.24.79
10.100.24.8
10.100.24.80
10.100.24.81
10.100.24.82
10.100.24.83
10.100.24.84
10.100.24.85
10.100.24.86
10.100.24.87
10.100.24.88
10.100.24.89
10.100.24.9
10.103.51.63
10.103.51.71"
for Host in $Host_List
do
scp -o GSSAPIAuthentication=no ${monbase}/${scpfilename} $Host:${dst}
done
;;
2)
# choose the environment to operate on
echo "Enter the environment abbreviation: t (functional test env), api (api test env)"
read envname
if [ "${envname}" = "api" ]; then
echo "batch-copying ${scpfilename} back from the remote monitored servers of the ${envname} environment"
Host_List=" 10.100.24.10
10.100.24.11
10.100.24.12
10.100.24.13
10.100.24.133
10.100.24.14
10.100.24.15
10.100.24.16
10.100.24.17
10.100.24.18
10.100.24.19
10.100.24.73
10.100.24.74
10.100.24.75
10.100.24.76
10.100.24.77
10.100.24.78
10.100.24.79
10.100.24.8
10.100.24.80
10.100.24.81
10.100.24.82
10.100.24.83
10.100.24.84
10.100.24.85
10.100.24.86
10.100.24.87
10.100.24.88
10.100.24.89
10.100.24.9
10.103.51.63
10.103.51.71"
for Host in $Host_List
do
scp -o GSSAPIAuthentication=no $Host:${monbase}/${scpfilename} ${monbase}
done
elif [ "${envname}" = "t" ];then
echo "batch-copying ${scpfilename} back from the remote monitored servers of the ${envname} environment"
Host_List=" 10.103.51.101
10.103.51.102
10.103.51.137
10.103.51.138
10.103.51.139
10.103.51.156
10.103.51.164
10.103.51.174
10.103.51.217
10.103.51.221
10.103.51.222
10.103.51.225
10.103.51.229
10.103.51.230
10.103.51.231
10.103.51.232
10.103.51.233
10.103.51.234
10.103.51.235
10.103.51.237
10.103.51.26
10.103.51.43
10.103.51.52
10.103.51.54
10.103.51.55
10.103.51.56
10.103.51.57
10.103.51.58
10.103.51.59
10.103.51.60
10.103.51.61
10.103.51.62
10.103.51.63
10.103.51.64
10.103.51.65
10.103.51.66
10.103.51.67
10.103.51.68
10.103.51.69
10.103.51.70
10.103.51.71
10.103.51.72
10.103.51.73
10.103.51.74
10.103.51.75
10.103.51.76
10.103.51.77
10.103.51.79
10.103.51.80
10.103.51.88
10.103.51.89
10.103.51.90
10.103.51.91
10.103.51.92
10.103.51.93
10.103.51.94
10.103.51.95
10.103.51.96
10.103.51.97
10.103.51.98"
for Host in $Host_List
do
scp -o GSSAPIAuthentication=no $Host:${monbase}/${scpfilename} ${monbase}
done
else echo "other situation"
fi
;;
*)
echo "no switch can be matched!"
;;
esac
done  # end of the argument loop
echo "batch copy finished"
if [ "$switch" = "2" ];then
./getservercollect.sh ${envname}
fi
exit 0
Script walkthrough: batch-copy the monitoring data gathered on the monitored servers back to one collection machine, so all the monitoring material can be aggregated in one place.
# database connection parameters
dbhost=10.100.24.15
dbuser=root
dbpwd=C8dM1B9wd1iQC7Y
dbname="ApolloConfigDB ApolloPortalDB bangbang_manage ersdata juanpi_manage mysql scdata test wowo_manage xiaocheng_manage xiaodai xiaodai_black xiaodai_manage xiaodai_market_manage xiaodai_portal xiaodai_r360 xiaodai_third xiaoxiaodai xxl-job xxl-job-182 yinbin-xiaodai-backup0622 yinbintest"
# start timestamp
startime=`date +%Y%m%d%H%M%S`
# directory holding the restore files
filepath=/server/nmondir
echo -n "Enter the sql file to restore from ('backupfile'):"
read backupfile
echo "selected restore file: $backupfile"
if [ `echo $backupfile | grep -e fullbackup` ];then switch=1
elif [ `echo $backupfile | grep -e partialbackup` ];then switch=2
else echo "the selected restore file does not match the naming rules; check the file name and rerun this program";exit 0
fi
case $switch in
1)
echo "starting a full restore from ${backupfile}; please wait for the \"restore finished\" message to appear"
sleep 2;
#echo "temporary debug message: full restore started"
#mysql -u${dbuser} -p${dbpwd} ${dbname} <${backupfile}
mysql -u${dbuser} -p${dbpwd} <${backupfile}
;;
2)
currentbaklist="
ast_current_account
ast_interest_invest_reocrd
ast_manually_lending
ast_maturity_platform_user_redeem
ast_out_config
ast_to_match
ast_to_match_detail
ast_user_account
ast_user_amount_match_queue
ast_user_in_out_matching
ast_user_out_apply
ast_user_out_interest
ast_user_out_matched_detail
ast_warehousing
curent_product_money_record
current_ast_maturity_platform_user_redeem
current_ast_user_out_apply
current_product
current_product_calendar
current_product_contract
current_product_desc
current_product_history_rate
current_product_pay_record
current_product_rate_info
current_product_statistics
current_user_account
current_user_income_invest_record
current_user_interest_record
current_user_invest_record
current_user_pay_record
current_user_redeem_record
money_record
user_account
loan
loan_asset
loan_base
loan_phase
bank_card"
cat /dev/null>${filepath}/table.CREATE.list
cat /dev/null>${filepath}/table.CREATE.txt
filterlist="CREATE"
for filter in $filterlist;
do
grep $filter ${filepath}/${backupfile}>${filepath}/table.${filter}.list
while read line;
do
echo $line|awk '{print $3}' >>${filepath}/table.${filter}.txt
done<${filepath}/table.${filter}.list
done
echo "tables that will be restored:"
cat ${filepath}/table.${filter}.txt
#echo "tables that will be restored: ${currentbaklist}"
sleep 2;
echo -n "('judge') enter 1 to confirm the restore, 2 to abort:"
read judge
echo "your input: $judge"
if [ $judge != "1" ];
then echo "you chose to abort; rerun this program once you are ready to restore";exit 0
fi
echo "starting restore of the selected tables from ${backupfile}; please wait for the \"restore finished\" message to appear"
sleep 2;
#echo "temporary debug message: partial restore started"
mysql -uroot -psymdata xinda_product < ${backupfile}
;;
*)
echo "does not match the rules; read the rules and rerun this program";exit 0
;;
# end timestamp
endtime=`date +%Y%m%d%H%M%S`
echo "restore finished, start time: ${startime}, end time: ${endtime}";sleep 2;ls -lrt;exit 0
Script walkthrough: used to restore a scenario's data or to re-run with previously prepared data.
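Before piping a dump into mysql, a cheap sanity check lets the script reject a truncated or wrong file instead of half-restoring it. A sketch (`check_dump` is a hypothetical helper, not part of the original script):

```shell
#!/bin/sh
# Sketch: reject empty files and files that do not look like mysqldump
# output before attempting a restore.

check_dump() {
    f=$1
    if [ ! -s "$f" ]; then
        echo "empty or missing: $f"; return 1
    fi
    # a mysqldump file starts with a "-- MySQL dump" header and contains DDL/DML
    if head -n 20 "$f" | grep -q 'MySQL dump\|CREATE\|INSERT'; then
        return 0
    fi
    echo "does not look like a SQL dump: $f"; return 1
}
```

Usage: `check_dump "${backupfile}" && mysql -u${dbuser} -p${dbpwd} < "${backupfile}"`.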
# monitoring base directory
monbase=/server/nmondir
# tomcat log directory
tomcatbase=/server/tomcat/logs
# tomcat start/stop directory
tomcatrestart=/server/tomcat/bin
# ip variable
lip=`LC_ALL=C ifconfig | grep 'inet addr:'| grep -v '127.0.0.1' |cut -d: -f2 | awk '{ print $1}'`
# location of the file to replace
filedir1=/server/tomcat/webapps/ROOT/WEB-INF/classes/spring/
# file to replace
file1=springmvc.xml
# perform the replacement
echo "${lip} mv ${file1} ${file1}.bak"
cd ${filedir1}/;ls -lrt;sleep 1;mv ${file1} ${file1}.bak;ls -lrt;sleep 1;
echo "${lip} cp ${monbase}/${file1} ${filedir1}"
cp ${monbase}/${file1} ${filedir1};ls -lrt;sleep 1;
chmod 664 ${filedir1}/${file1}
echo "restarting the tomcat service"
ps -fe|grep tomcat |grep -v grep
if [ $? -eq 0 ]
then
echo "${lip} tomcat process is still running, need to restart"
cd /etc/init.d;./tomcat restart;
sleep 2;echo "${lip} output the tomcat running status"
#tail -f /server/tomcat/logs/catalina.out |grep "Server startup in"
else
echo "tomcat is not running, need to start"
cd /etc/init.d;./tomcat start;
sleep 1;echo "${lip} output the tomcat running status";
#tail -f /server/tomcat/logs/catalina.out |grep "Server startup in"
fi
exit 0
#ps -fe|grep tomcat |grep -v grep
#if [ $? -eq 0 ]
# then
# echo "${lip} tomcat process is still running,need to kill"
# ps -ef | grep tomcat | grep -v grep | awk '{print $2}' | xargs kill -9;sleep 3
# echo "${lip} output tomcat process after killing "
# ps -ef | grep tomcat;sleep 2
# echo "${lip} stop process end"
# sh ${tomcatrestart}/startup.sh
# echo "${lip} output tomcat process after killing and running startup.sh"
# sleep 1;echo "ps -ef |grep tomcat"
# ps -ef |grep tomcat
# sleep 2;echo "${lip} output the tomcat running status"
#else
# echo "tomcat are not running,need to start"
# sh ${tomcatrestart}/startup.sh;
# echo "${lip} output tomcat process after running startup.sh";
# sleep 2;echo "ps -ef |grep tomcat";
# ps -ef |grep tomcat;
# sleep 1;echo "${lip} output the tomcat running status";
#fi
#exit 0
Script walkthrough: before a load test, automatically swap in the Java code or configuration files the test needs, e.g. a universal-captcha patch or an xml configuration file.
Benefit: developers can make test-only code changes for the load test without the risk of those changes being accidentally released to production.