Passing arguments to a daemon runner in Python

There is a lot of bookkeeping involved in keeping a daemon running: preventing it from consuming all resources, keeping zombie processes from haunting your system, and so on.

Below is a simplified version of the simplest daemon we run (it keeps a number of worker processes running and looping). It calls commandq.run_command() to do the actual work (not included here).

If you can get by with a simpler cron job instead, you are probably better off (you need a cron job or something similar to verify that the daemon is running anyway).
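As a rough sketch of that watchdog idea (the install path, interval, and python invocation are assumptions, and the check keys off the status messages printed by the script below), a crontab entry could look like:

# hypothetical watchdog: every 5 minutes, start the worker daemon if it is not running
*/5 * * * * python /usr/local/bin/commandq-worker.py status | grep -q "NOT running" && python /usr/local/bin/commandq-worker.py start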

import os, sys, time, argparse, random, signal
import multiprocessing
import psutil

# originally from http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
# now at https://gist.github.com/dcai/1075904/6f7be00f7f411d5c2e7cd1691dcbb68efacb789c
import daemon

import commandq  # application-specific module that does the actual work (not included)


def _ensure_dir(*pth):
    path = os.path.join(*pth)
    if os.path.exists(path):
        if not os.path.isdir(path):
            raise RuntimeError("%r is not a directory!" % path)
    else:
        os.makedirs(path, 0775)
    return path

PID_DIRECTORY = _ensure_dir('/var/run/commandq/')
PID_FNAME = 'commandq-worker.pid'
PID_FILE = os.path.join(PID_DIRECTORY, PID_FNAME)


def worker(args, parentpid):
    """Command Queue worker process."""
    # number of tasks to process before dying (we can't just keep looping
    # in case the client code has resource leaks..)
    recycle = args.recycle

    def sleep_or_die(n=0):
        """If our parent died (or got killed), we commit suicide
           (this is much easier than trying to kill sub-threads from the
           parent).
        """
        # os.getppid() only exists on Linux..
        if os.getppid() != parentpid:  # i.e. parent died
            sys.exit()
        # back off if the system is busy (i.e. don't cause a death spiral..)
        if psutil.cpu_percent() > 70.0:
            time.sleep(25)
            if os.getppid() != parentpid:  # check that parent didn't die
                sys.exit()
        if n > 0:
            time.sleep(n)

    while recycle:
        sleep_or_die()  # don't take all cpu-resources
        try:
            # WORK: pulls a unit of work and executes it
            #  - raises NoWork if the work queue is empty
            #  - raises LockException if there is too much lock
            #    contention (i.e. timeout waiting for a lock)
            commandq.run_command()
        except commandq.NoWork as e:
            # introduce randomness to prevent "harmonics"
            sleeptime = random.randrange(1, 5)
            sleep_or_die(sleeptime)
        except commandq.LockException as e:
            # too much lock contention... back off a little.
            sleep_or_die(random.randrange(3, 10))
        recycle -= 1


def start_workers(count, args):
    "Start ``count`` number of worker processes."
    procs = [multiprocessing.Process(target=worker, args=(args, os.getpid()))
             for _i in range(count)]
    _t = [t.start() for t in procs]
    return procs


def main(daemon):
    "Daemon entry point."
    args = daemon.args
    procs = start_workers(args.count, args)  # start args.count workers
    while 1:
        # active_children() joins finished processes
        procs = multiprocessing.active_children()
        missing = args.count - len(procs)
        if missing:
            # if any of our workers died, start replacement processes
            procs += start_workers(missing, args)
        time.sleep(5)
        if os.getpid() != daemon.getpid():
            # a second copy has started, i.e. we should stop running
            return
    # [t.join() for t in procs]  # subprocesses will die when they discover that we died


class WorkerDaemon(daemon.Daemon):
    "Worker daemon."
    args = None

    def run(self):
        "main() does all the work."
        def sigint_handler(signum, frame):
            self.delpid()
            signal.signal(signum, signal.SIG_DFL)
            # re-throw the signal (this time without catching it).
            os.kill(os.getpid(), signum)

        # make sure we don't leave sub-processes as zombies when someone
        # kills the daemon process.
        signal.signal(signal.SIGINT, sigint_handler)
        signal.signal(signal.SIGTERM, sigint_handler)

        main(self)


if __name__ == "__main__":
    cpucount = multiprocessing.cpu_count()

    parser = argparse.ArgumentParser(description='Command Queue worker.')
    parser.add_argument(
        '-n', dest='count', type=int, default=cpucount,
        help='number of worker processes (defaults to number of processors).')
    parser.add_argument(
        'action', nargs='?',
        help='stop|restart|status of the worker daemon.')
    # when we move to 2.7 we can use the maxtasksperchild argument to
    # multiprocessing.Pool
    parser.add_argument(
        '--recycle', dest='recycle', type=int, default=400,
        help='number of iterations before recycling the worker process.')
    _args = parser.parse_args()

    daemon = WorkerDaemon(PID_FILE)
    daemon.args = _args

    if _args.action == 'status':
        if daemon.status():
            print "commandq worker is running."
        else:
            print "commandq worker is NOT running."
        sys.exit(0)
    elif _args.action == 'stop':
        daemon.stop()
    elif _args.action == 'restart':
        daemon.restart()
    else:
        daemon.start()

    sys.exit(0)
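The commandq module that does the real work is not part of the answer. Purely as a hypothetical stand-in (only the names run_command(), NoWork and LockException come from the code above; the in-memory queue and the add_command() helper are assumptions for local testing, and a real deployment would keep the queue in a database or message broker so separate worker processes can share it), a minimal sketch could look like:

# commandq.py -- hypothetical stub, not the original module
class NoWork(Exception):
    "Raised when the work queue is empty."

class LockException(Exception):
    "Raised when there is too much lock contention (timeout waiting for a lock)."

_queue = []  # stand-in only: a module-level list is not shared across worker processes

def add_command(fn, *args, **kwargs):
    "Queue a callable for later execution (hypothetical helper)."
    _queue.append((fn, args, kwargs))

def run_command():
    "Pull one unit of work and execute it; raise NoWork if the queue is empty."
    try:
        fn, args, kwargs = _queue.pop(0)
    except IndexError:
        raise NoWork()
    return fn(*args, **kwargs)

Note how the command-line arguments reach the daemonized code: they are parsed once in the __main__ block, attached to the daemon instance with daemon.args = _args before start() is called, and then read back inside run() via main(daemon), which uses daemon.args.count and daemon.args.recycle. Storing the parsed namespace on the daemon object is what lets you pass arguments to the daemon runner without changing the Daemon base class.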
