Shell Script Examples

1: Shell usage

1). Start the script with an interpreter line: #!/bin/bash
2). Indent consistently (four spaces) and add plenty of explanatory comments.
3). Naming conventions: global variables uppercase, local variables lowercase, function names lowercase.
4). Variables are global by default; use `local` inside functions to declare function-local variables.
5). Two debugging commands: `set -e` exits the script as soon as a command returns non-zero; `set -x` prints each command as it executes.
6). Always test a script first, then add it to the project.
7). Add execute permission to the file, then test it again.
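The checklist above can be sketched as a tiny script; the directory path and function name below are illustrative assumptions, not from the source.

```shell
#!/bin/bash
# Conventions from the list above: set -e / set -x for debugging,
# uppercase for globals, lowercase locals declared with `local`.
set -e                      # exit as soon as a command returns non-zero
set -x                      # trace each command before it runs

LOG_DIR="/tmp"              # global: uppercase (illustrative path)

print_size() {
    local target="$1"       # local: lowercase, scoped to this function
    du -sh "$target"
}

print_size "$LOG_DIR"
```

Per tip 7, make it executable before running: `chmod +x demo.sh && ./demo.sh`.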

2: Shell example one

Periodically empty file contents and periodically record file sizes.
https://www.jb51.net/article/167723.htm

#!/bin/bash
################################################################
# Run once an hour (via a scheduled task). When the hour is 00 or
# 12, empty every file under the target directory without deleting
# the files themselves; at any other hour, record each file's size,
# one file per line, in a log named after the hour and date. Files
# in second-, third-level and deeper subdirectories are included.
################################################################
logfile=/tmp/$(date +%H-%F).log
n=$(date +%H)
if [ "$n" -eq 00 ] || [ "$n" -eq 12 ]
then
    # Use find to walk every regular file under the target directory
    for i in $(find /data/log/ -type f)
    do
        true > "$i"
    done
else
    for i in $(find /data/log/ -type f)
    do
        du -sh "$i" >> "$logfile"
    done
fi
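The `true > "$i"` redirection is what empties a file without deleting it; a quick self-contained demonstration (the temp file is created on the fly):

```shell
#!/bin/bash
# Truncate-without-delete, as used in the script above.
f=$(mktemp)
echo "some log data" > "$f"
true > "$f"          # empty the file but keep it in place
ls "$f"              # the file still exists
wc -c < "$f"         # the byte count is now 0
rm -f "$f"
```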

3: Example two

Back up a MySQL server per database and per table.
https://blog.51cto.com/13520779/2093146

#!/bin/bash
Mycmd="mysql -uroot -p1234"
Mydump="mysqldump -uroot -p1234"
Dblist=$(${Mycmd} -e "show databases;" | sed 1d | egrep -v "_schema|mysql")
[ -d /tmp/mysql_bak/ ] || mkdir -p /tmp/mysql_bak/
for database in ${Dblist}
do
    Tablist=$(${Mycmd} -e "show tables from ${database};" | sed 1d)
    for table in ${Tablist}
    do
        mkdir -p /tmp/mysql_bak/${database}
        ${Mydump} ${database} ${table} | gzip > /tmp/mysql_bak/${database}/${table}_$(date +%F).sql.gz
    done
    echo -e "\033[32m mysqldump ${database} is ok!!! \033[0m"
done
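A backup is only useful if it restores; a hedged sketch of restoring one table dump (the database name `mydb` and table name `users` are assumptions for illustration, not from the source):

```shell
#!/bin/bash
# Recreate one table from a gzipped dump written by the script above.
# mydb and users are illustrative names; adjust to your own backup.
gunzip -c /tmp/mysql_bak/mydb/users_$(date +%F).sql.gz | mysql -uroot -p1234 mydb
```

The gzip pipeline simply runs in reverse: `gunzip -c` streams the SQL text into the mysql client.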

4: Shell basics

$?              exit status of the last command: 0 means success (true), non-zero means failure (false)
chmod 777 file  grants everyone full permissions (for a script, chmod +x is usually enough)
$0, $1          positional parameters: $0 is the script name, $1 the first argument
${a}            reference the variable a
text > file     overwrite the file's contents
text >> file    append to the file; < file redirects input (<< starts a here-document)
source file     execute the file's contents in the current shell
\n              escape for a newline; \c suppresses the trailing newline (both with echo -e)
du              show file and directory sizes

`expr 1 + 1`    arithmetic (shell arithmetic $((1 + 1)) is the modern form)
Functions and flow control: https://www.runoob.com/linux/linux-tutorial.html
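Several of these basics can be exercised in one short script (the temp-file path is created on the fly):

```shell
#!/bin/bash
# Exercise $?, ${a}, > vs >>, and arithmetic from the list above.
a="hello"
echo "${a}"                 # variable reference

false || status=$?          # capture $? of the failed command
echo "$status"              # 1: non-zero means false

out=$(mktemp)
echo "line1" >  "$out"      # > overwrites
echo "line2" >> "$out"      # >> appends
wc -l < "$out"              # the file now has 2 lines

echo $(( 1 + 1 ))           # modern replacement for `expr 1 + 1`
rm -f "$out"
```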
From an advanced shell-scripting tutorial; hope it helps. Example 10-23. Using "continue N" in an actual task:

# Albert Reiner gives an example of how to use "continue N":
# ---------------------------------------------------------

# Suppose I have a large number of jobs that need to be run, with
#+ any data that is to be treated in files of a given name pattern in a
#+ directory. There are several machines that access this directory, and
#+ I want to distribute the work over these different boxen. Then I
#+ usually nohup something like the following on every box:

while true
do
  for n in .iso.*
  do
    [ "$n" = ".iso.opts" ] && continue
    beta=${n#.iso.}
    [ -r .Iso.$beta ] && continue
    [ -r .lock.$beta ] && sleep 10 && continue
    lockfile -r0 .lock.$beta || continue
    echo -n "$beta: " `date`
    run-isotherm $beta
    date
    ls -alF .Iso.$beta
    [ -r .Iso.$beta ] && rm -f .lock.$beta
    continue 2
  done
  break
done

# The details, in particular the sleep N, are particular to my
#+ application, but the general pattern is:

while true
do
  for job in {pattern}
  do
    {job already done or running} && continue
    {mark job as running, do job, mark job as done}
    continue 2
  done
  break # Or something like `sleep 600' to avoid termination.
done

# This way the script will stop only when there are no more jobs to do
#+ (including jobs that were added during runtime). Through the use
#+ of appropriate lockfiles it can be run on several machines
#+ concurrently without duplication of calculations [which run a couple
#+ of hours in my case, so I really want to avoid this]. Also, as search
#+ always starts again from the beginning, one can encode priorities in
#+ the file names. Of course, one could also do this without `continue 2',
#+ but then one would have to actually check whether or not some job
#+ was done (so that we should immediately look for the next job) or not
#+ (in which case we terminate or sleep for a long time before checking
#+ for a new job).
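The `continue 2` behavior is easiest to see in a minimal pair of loops (the letter/number labels are illustrative):

```shell
#!/bin/bash
# `continue 2` skips the rest of BOTH loop bodies and resumes the outer loop.
for outer in a b
do
    for inner in 1 2 3
    do
        echo "$outer$inner"
        continue 2          # jump straight to the next outer iteration
    done
    echo "never reached"    # skipped: continue 2 always fires first
done
# prints a1, then b1
```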