Shell application examples

User creation script

Run: users_create.sh userfile passfile

Create every user listed in userfile
Set each user's password to the password on the same line of passfile
Report an error if fewer than two file arguments are given
Report an error if the two files have different numbers of lines
Report an error if a file does not exist
Report an error if a user already exists
(The script below only does the creation itself; a sketch of the checks follows it.)

#!/bin/bash
# users_create.sh userfile passfile
# create the users listed in $1 and set their passwords from $2

n=`awk 'BEGIN{N=0}{N++}END{print N}' "$1"`      # number of lines (users) in userfile
for Num in `seq 1 "$n"`
do
        User_name=`sed -n "${Num}p" "$1"`       # user name on line $Num of userfile
        Passwd=`sed -n "${Num}p" "$2"`          # password on the same line of passfile
        useradd "$User_name"
        echo "$Passwd" | passwd --stdin "$User_name"
done
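
The requirements above also call for argument and error checks that the script does not implement. A minimal sketch of those checks, written against the same $1/$2 arguments (variable names here are illustrative, not from the original):

#!/bin/bash
# sketch: argument and sanity checks for users_create.sh (illustrative)

[ "$#" -lt 2 ] && {                        # fewer than two file arguments
        echo "Usage: $0 userfile passfile"
        exit 1
}
[ -e "$1" ] || { echo "$1 does not exist"; exit 1; }
[ -e "$2" ] || { echo "$2 does not exist"; exit 1; }

USER_LINES=`wc -l < "$1"`
PASS_LINES=`wc -l < "$2"`
[ "$USER_LINES" -ne "$PASS_LINES" ] && {   # line counts differ
        echo "$1 and $2 have different numbers of lines"
        exit 1
}

for Num in `seq 1 "$USER_LINES"`
do
        User_name=`sed -n "${Num}p" "$1"`
        id "$User_name" &> /dev/null && {  # user already exists
                echo "$User_name already exists"
                continue
        }
        useradd "$User_name"
        echo "`sed -n "${Num}p" "$2"`" | passwd --stdin "$User_name"
done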


Service auto-deployment example

Run the script lamp.sh
After it runs, the forum is deployed and Apache's listening port is set to 8080
Changing the httpd port (the port is passed as the first argument; a deployment sketch follows the script):

#!/bin/bash
# lamp.sh <port> -- switch httpd to the given port

[ -z "$1" ] && {
        echo "please give a port number"
        exit 1
}

sed -i "/^Listen/cListen $1" /etc/httpd/conf/httpd.conf     # replace the Listen directive
systemctl restart httpd
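
The fragment above only switches the port. A rough sketch of the deployment half, assuming a RHEL-style system with yum, a MariaDB/PHP stack and a Discuz forum archive (package names and the archive path are assumptions, not from the original script):

#!/bin/bash
# sketch: deploy a LAMP stack plus forum code, then switch the port (illustrative)

yum install -y httpd mariadb-server php php-mysql unzip    # assumed package set
systemctl enable httpd mariadb
systemctl start httpd mariadb

# unpack the forum code into the web root (archive name is an assumption)
unzip -o /root/Discuz_X3.2_SC_UTF8.zip -d /var/www/html
chmod -R 777 /var/www/html/upload                          # let the web installer write its config

# switch Apache to the requested port, as in the fragment above
sed -i "/^Listen/cListen $1" /etc/httpd/conf/httpd.conf
systemctl restart httpd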


Database backup

Run: db_dump.sh westos (westos is the database password)
After the script runs, every database is dumped into /mnt/mysqldump
Each dump is named "dbname.sql"; if that file already exists, report it and ask what to do
Enter "S" to skip the backup, "B" to keep the existing "dbname.sql" as "dbname_backup.sql" before dumping again, or "O" to overwrite the existing file
(A usage example follows the script.)

#!/bin/bash
# db_dump.sh <mysql-root-password> -- dump each database to /mnt/mysqldump

BACKUP_DIR=/mnt/mysqldump
PASSWORD=$1

ACTION_CMD(){
        read -p "
        [S]kip [B]ackup [O]verwrite
        please input action: " ACTION
        ACTION=`echo $ACTION | tr 'A-Z' 'a-z'`
        case $ACTION in
                s)                      # keep the existing dump, skip this database
                ;;
                b)                      # keep the old dump as <db>_backup.sql, then dump again
                mv $BACKUP_DIR/$1.sql $BACKUP_DIR/${1}_backup.sql
                mysqldump -uroot -p$PASSWORD $1 > $BACKUP_DIR/$1.sql
                ;;
                o)                      # overwrite the existing dump
                mysqldump -uroot -p$PASSWORD $1 > $BACKUP_DIR/$1.sql
                ;;
                exit)
                echo bye
                exit 0
                ;;
                *)
                echo "error: please input [S] [B] [O]"
                ACTION_CMD $1           # ask again on invalid input
        esac
}

mkdir -p $BACKUP_DIR
# database list to back up; it could also be produced with: mysql -uroot -p$PASSWORD -N -e "show databases;"
for DATABASE in hello westos linux
do
        if [ -e "$BACKUP_DIR/$DATABASE.sql" ];then
                echo "$BACKUP_DIR/$DATABASE.sql already exists"
                ACTION_CMD $DATABASE
        else
                mysqldump -uroot -p$PASSWORD $DATABASE > $BACKUP_DIR/$DATABASE.sql
        fi
done
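
A quick usage example (the output shown is illustrative): run the script with the database password; when a dump already exists you are prompted for an action:

# sh db_dump.sh westos
/mnt/mysqldump/westos.sql already exists

        [S]kip [B]ackup [O]verwrite
        please input action: B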


Auto-login script

Run: auto_ssh.sh 172.25.254.177 redhat
172.25.254.177 is the target IP
redhat is the password
After the script runs, it logs in to 172.25.254.177 automatically and keeps the session
(The script shown below wraps ssh in expect to run commands on the remote host non-interactively; a minimal login-and-stay sketch follows it.)

#!/bin/bash
# auto_ssh.sh userfile passfile -- run commands on the remote host through expect

Auto_Connect()
{
# run the command given in $1 on the remote host, answering the ssh prompts automatically;
# grep -Ev filters the ssh/expect noise out of the captured output
/usr/bin/expect <<EOF | grep -Ev "authenticity|ECDSA|connecting|Warning|spawn|password"
set timeout 5
spawn ssh root@172.25.254.$NUM "$1"
expect {
        "yes/no" { send "yes\r";exp_continue }
        "password:" { send "redhat\r" }
}
expect eof
EOF
}

for NUM in 177
do
        ping -c1 -w1 172.25.254.$NUM &> /dev/null && {
        Max_Line=`awk 'BEGIN{N=0}{N++}END{print N}' $1`
        for Line_Num in `seq 1 $Max_Line`
        do
                USERNAME=`sed -n ${Line_Num}p $1`
                PASSWORD=`sed -n ${Line_Num}p $2`
                User_check=`Auto_Connect "useradd $USERNAME"`   # non-empty output means useradd failed
                [ -n "$User_check" ] && {
                echo $User_check
                } || {
                Auto_Connect "echo $PASSWORD | passwd --stdin $USERNAME"
                }
        done
        } || echo "172.25.254.$NUM is down"
done
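
For the behaviour described at the top of this example (log in and stay logged in rather than run a single command), a minimal sketch could hand the session over with expect's interact. The IP and password come from the command line as in the example invocation; this is an illustration, not the original script:

#!/bin/bash
# sketch: auto_ssh.sh <ip> <password> -- log in and keep the interactive session (illustrative)

/usr/bin/expect -c "
set timeout 10
spawn ssh root@$1
expect {
        \"yes/no\"    { send \"yes\r\"; exp_continue }
        \"password:\" { send \"$2\r\" }
}
interact
exit
"

Run it as ./auto_ssh.sh 172.25.254.177 redhat; when the remote session is closed, expect exits.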

