gprecoverseg Code Analysis

1. Function Call Flow for Full Recovery (Full Copy)

gprecoverseg

gprecoverseg itself is only an entry point; it dispatches as follows:

# gprecoverseg

from gppylib.mainUtils import simple_main
from gppylib.programs.clsRecoverSegment import GpRecoverSegmentProgram

if __name__ == '__main__':
    simple_main(GpRecoverSegmentProgram.createParser,
                GpRecoverSegmentProgram.createProgram,
                GpRecoverSegmentProgram.mainOptions())

mainUtils.py

The next call site is in mainUtils.py, at roughly line 341:

# mainUtils.py

-------------------
try:
    execname = getProgramName()
    hostname = unix.getLocalHostname()
    username = unix.getUserName()

    parser = createOptionParserFn()
    (options, args) = parser.parse_args()

    if useHelperToolLogging:
        gplog.setup_helper_tool_logging(execname, hostname, username)
    else:
        gplog.setup_tool_logging(execname, hostname, username,
                                 logdir=options.ensure_value("logfileDirectory", None), nonuser=nonuser)

    if forceQuiet:
        gplog.quiet_stdout_logging()
    else:
        if options.ensure_value("verbose", False):
            gplog.enable_verbose_logging()
        if options.ensure_value("quiet", False):
            gplog.quiet_stdout_logging()

    if options.ensure_value("masterDataDirectory", None) is not None:
        options.master_data_directory = os.path.abspath(options.masterDataDirectory)

    if not suppressStartupLogMessage:
        logger.info("Starting %s with args: %s" % (gProgramName, ' '.join(sys.argv[1:])))

    commandObject = createCommandFn(options, args)
    exitCode = commandObject.run()      # the actual call site; returns a status code indicating whether recovery succeeded
    exit_status = exitCode
-------------------
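
mainUtils probes options with optparse's ensure_value, which returns the attribute if the parser already set it and otherwise stores and returns the supplied default. A minimal standalone illustration, mirroring two of the option names used above:

-------------------
from optparse import OptionParser

parser = OptionParser()
parser.add_option("-v", "--verbose", action="store_true")
(options, args) = parser.parse_args(["-v"])

# Returns the parsed value when the attribute exists...
print(options.ensure_value("verbose", False))          # True
# ...and silently creates it with the default when it does not,
# which is why mainUtils can query options a tool never defined.
print(options.ensure_value("logfileDirectory", None))  # None
-------------------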

clsRecoverSegment.py

The run method invoked above belongs to GpRecoverSegmentProgram in clsRecoverSegment.py. The part of its code that triggers the recovery:

# clsRecoverSegment.py

----------------------
# code that triggers the full recovery
# sync packages
current_hosts = set(gpArray.getHostList())
new_hosts = current_hosts - existing_hosts
if new_hosts:
    self.syncPackages(new_hosts)

contentsToUpdate = [seg.getLiveSegment().getSegmentContentId() for seg in mirrorBuilder.getMirrorsToBuild()]
update_pg_hba_on_segments(gpArray, self.__options.hba_hostnames, self.__options.parallelDegree, contentsToUpdate)
if not mirrorBuilder.buildMirrors("recover", gpEnv, gpArray):            # <--- calls buildMirrors in buildMirrorSegments.py, which drives the recovery
    sys.exit(1)

confProvider.sendPgElogFromMaster("Recovery of %d segment(s) has been started." % \
                                  len(mirrorBuilder.getMirrorsToBuild()), True)

self.trigger_fts_probe(port=gpEnv.getMasterPort())

self.logger.info("********************************")
self.logger.info("Segments successfully recovered.")
self.logger.info("********************************")
----------------------

Only after the recovery commands complete does gprecoverseg trigger the follow-up FTS probe, startup, and related steps.
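
For reference, a hedged sketch of what trigger_fts_probe amounts to, assuming GPDB's gp_request_fts_probe_scan() UDF and the gppylib dbconn helpers (the real method lives in clsRecoverSegment.py):

----------------------
from gppylib.db import dbconn

def trigger_fts_probe(port):
    # Assumption: connect to the master and request an immediate FTS scan
    # so the freshly recovered mirrors get re-probed and marked up.
    conn = dbconn.connect(dbconn.DbURL(port=port))
    try:
        dbconn.execSQL(conn, "SELECT gp_request_fts_probe_scan()")
    finally:
        conn.close()
----------------------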

buildMirrorSegments.py

Inside buildMirrors, the code path that performs the full recovery is:

# buildMirrorSegments.py

------------------------
self.__ensureStopped(gpEnv, toStopDirectives)
self.__ensureSharedMemCleaned(gpEnv, toStopDirectives)
self.__ensureMarkedDown(gpEnv, toEnsureMarkedDown)
if not self.__forceoverwrite:
    self.__cleanUpSegmentDirectories(cleanupDirectives)    # <---- empty the data directories before recovery
self.__copySegmentDirectories(gpEnv, gpArray, copyDirectives)    # <---- trigger the full recovery
-------------------------

__copySegmentDirectories is shown below; the function that actually launches the recovery commands is __runWaitAndCheckWorkerPoolForErrorsAndClear:

# buildMirrorSegments.py

------------------------
def __copySegmentDirectories(self, gpEnv, gpArray, directives):
    """
    directives should be composed of GpCopySegmentDirectoryDirective values
    """
    # directives holds the information about the segments that need recovery
    if len(directives) == 0:
        return

    srcSegments = []                # source segments, i.e. the live peers of the segments being recovered
    destSegments = []               # segments to be recovered
    isTargetReusedLocation = []
    timeStamp = datetime.datetime.today().strftime('%Y%m%d_%H%M%S')
    for directive in directives:
        srcSegment = directive.getSrcSegment()
        destSegment = directive.getDestSegment()
        destSegment.primaryHostname = srcSegment.getSegmentHostName()
        destSegment.primarySegmentPort = srcSegment.getSegmentPort()
        destSegment.progressFile = '%s/pg_basebackup.%s.dbid%s.out' % (gplog.get_logger_dir(),
                                                                       timeStamp,
                                                                       destSegment.getSegmentDbId())
        srcSegments.append(srcSegment)
        destSegments.append(destSegment)
        isTargetReusedLocation.append(directive.isTargetReusedLocation())  # True means the directory has already been cleaned by gpcleansegmentdir.py

    destSegmentByHost = GpArray.getSegmentsByHostName(destSegments)     # dict keyed by the hostname of each segment to recover; the value is that host's list of segment info
    newSegmentInfo = gp.ConfigureNewSegment.buildSegmentInfoForNewSegment(destSegments, isTargetReusedLocation)     # builds the new-segment info used as ConfigureNewSegment's confinfo parameter: hostname, data directory, port, progress-file location, etc.
    
    # helper that configures a new segment; it simply returns a gp.ConfigureNewSegment command object
    def createConfigureNewSegmentCommand(hostName, cmdLabel, validationOnly):
        segmentInfo = newSegmentInfo[hostName]
        checkNotNone("segmentInfo for %s" % hostName, segmentInfo)

        return gp.ConfigureNewSegment(cmdLabel,
                                      segmentInfo,
                                      gplog.get_logger_dir(),
                                      newSegments=True,
                                      verbose=gplog.logging_is_verbose(),
                                      batchSize=self.__parallelPerHost,
                                      ctxt=gp.REMOTE,
                                      remoteHost=hostName,
                                      validationOnly=validationOnly,
                                      forceoverwrite=self.__forceoverwrite)
    #
    # validate directories for target segments
    #
    self.__logger.info('Validating remote directories')
    cmds = []
    for hostName in destSegmentByHost.keys():
        cmds.append(createConfigureNewSegmentCommand(hostName, 'validate blank segments', True))
    for cmd in cmds:
        self.__pool.addCommand(cmd)

    if self.__quiet:
        self.__pool.join()
    else:
        base.join_and_indicate_progress(self.__pool)

    validationErrors = []
    for item in self.__pool.getCompletedItems():
        results = item.get_results()
        if not results.wasSuccessful():
            if results.rc == 1:
                # stdoutFromFailure = results.stdout.replace("\n", " ").strip()
                lines = results.stderr.split("\n")
                for line in lines:
                    if len(line.strip()) > 0:
                        validationErrors.append("Validation failure on host %s %s" % (item.remoteHost, line))
            else:
                validationErrors.append(str(item))
    self.__pool.empty_completed_items()
    if validationErrors:
        raise ExceptionNoStackTraceNeeded("\n" + ("\n".join(validationErrors)))

    # Configure a new segment
    #
    # Recover segments using gpconfigurenewsegment, which
    # uses pg_basebackup. gprecoverseg generates a log filename which is
    # passed to gpconfigurenewsegment as a confinfo parameter. gprecoverseg
    # tails this file to show recovery progress to the user, and removes the
    # file when done. A new file is generated for each run of
    # gprecoverseg based on a timestamp.
    self.__logger.info('Configuring new segments')
    cmds = []
    progressCmds = []
    removeCmds = []
    for hostName in destSegmentByHost.keys():
        for segment in destSegmentByHost[hostName]:
            progressCmd, removeCmd = self.__getProgressAndRemoveCmds(segment.progressFile,
                                                                     segment.getSegmentDbId(),
                                                                     hostName)
            removeCmds.append(removeCmd)
            if progressCmd:
                progressCmds.append(progressCmd)

        # cmds holds one entry per host being recovered: a gppylib.commands.gp.ConfigureNewSegment
        # command, which assembles the gpconfigurenewsegment command line; gpconfigurenewsegment in turn assembles the pg_basebackup command
        cmds.append(
            createConfigureNewSegmentCommand(hostName, 'configure blank segments', False))

    # run the commands that invoke pg_basebackup to perform the full copy
    self.__runWaitAndCheckWorkerPoolForErrorsAndClear(cmds, "unpacking basic segment directory",
                                                      suppressErrorCheck=False,
                                                      progressCmds=progressCmds)

    self.__runWaitAndCheckWorkerPoolForErrorsAndClear(removeCmds, "removing pg_basebackup progress logfiles",
                                                      suppressErrorCheck=False)

    #
    # copy dump files from old segment to new segment
    #
    for srcSeg in srcSegments:
        for destSeg in destSegments:
            if srcSeg.content == destSeg.content:
                src_dump_dir = os.path.join(srcSeg.getSegmentDataDirectory(), 'db_dumps')
                cmd = base.Command('check existence of db_dumps directory', 'ls %s' % (src_dump_dir),
                                   ctxt=base.REMOTE, remoteHost=destSeg.getSegmentAddress())
                cmd.run()
                if cmd.results.rc == 0:  # Only try to copy directory if it exists
                    cmd = Scp('copy db_dumps from old segment to new segment',
                              os.path.join(srcSeg.getSegmentDataDirectory(), 'db_dumps*', '*'),
                              os.path.join(destSeg.getSegmentDataDirectory(), 'db_dumps'),
                              srcSeg.getSegmentAddress(),
                              destSeg.getSegmentAddress(),
                              recursive=True)
                    cmd.run(validateAfter=True)
                    break

------------------------

The code of __runWaitAndCheckWorkerPoolForErrorsAndClear:

# buildMirrorSegments.py

----------------------------
def __runWaitAndCheckWorkerPoolForErrorsAndClear(self, cmds, actionVerb, suppressErrorCheck=False,
                                                 progressCmds=[]):

    for cmd in cmds:
        self.__pool.addCommand(cmd) # the command starts running as soon as it is queued; here it mainly runs pg_basebackup

    if self.__quiet:
        self.__pool.join()
    elif progressCmds:
        self._join_and_show_segment_progress(progressCmds,
                                             inplace=self.__progressMode == GpMirrorListToBuild.Progress.INPLACE)
    else:
        base.join_and_indicate_progress(self.__pool)

    if not suppressErrorCheck:
        self.__pool.check_results()

    completedRecoveryCmds = list(set(self.__pool.getCompletedItems()) & set(cmds))

    self.__pool.empty_completed_items()

    return completedRecoveryCmds
----------------------------
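
The set intersection at the end restricts the pool's completed items to just this batch of commands, since the pool can contain other completed items (for example, progress commands). A toy illustration, with strings standing in for Command objects:

----------------------------
# Strings stand in for Command objects here.
completed = ["basebackup-sdw1", "tail-progress-sdw1", "basebackup-sdw2"]
cmds = ["basebackup-sdw1", "basebackup-sdw2"]

completedRecoveryCmds = list(set(completed) & set(cmds))
print(sorted(completedRecoveryCmds))  # ['basebackup-sdw1', 'basebackup-sdw2']
----------------------------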

gp.py

When __runWaitAndCheckWorkerPoolForErrorsAndClear triggers the recovery, the cmds argument contains ConfigureNewSegment objects from gp.py. This class is normally used to configure a new segment and is invoked by gpexpand, gpaddmirrors, and gprecoverseg (full recovery). It assembles the gpconfigurenewsegment command line; part of the code:

# gp.py

----------------------------
class ConfigureNewSegment(Command):
    """
    Configure a new segment, usually from a template, as is done during gpexpand, gpaddmirrors, gprecoverseg (full),
      etc.
    """

    def __init__(self, name, confinfo, logdir, newSegments=False, tarFile=None,
                 batchSize=None, verbose=False, ctxt=LOCAL, remoteHost=None, validationOnly=False, writeGpIdFileOnly=False,
                 forceoverwrite=False):

        cmdStr = '$GPHOME/bin/lib/gpconfigurenewsegment -c \"%s\" -l %s' % (confinfo, pipes.quote(logdir))

        if newSegments:
            cmdStr += ' -n'
        if tarFile:
            cmdStr += ' -t %s' % tarFile
        if verbose:
            cmdStr += ' -v '
        if batchSize:
            cmdStr += ' -b %s' % batchSize
        if validationOnly:
            cmdStr += " --validation-only"
        if writeGpIdFileOnly:
            cmdStr += " --write-gpid-file-only"
        if forceoverwrite:
            cmdStr += " --force-overwrite"

        Command.__init__(self, name, cmdStr, ctxt, remoteHost)
----------------------------
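
For illustration, the command string this constructor assembles for a typical full-recovery invocation might look like the following (host, log directory, and batch size are hypothetical; the confinfo payload is elided):

----------------------------
from gppylib.commands import gp

# Hypothetical values; the confinfo payload is elided.
cmd = gp.ConfigureNewSegment('configure blank segments', '<confinfo>',
                             '/home/gpadmin/gpAdminLogs',
                             newSegments=True, batchSize=16,
                             ctxt=gp.REMOTE, remoteHost='sdw1',
                             forceoverwrite=True)
print(cmd.cmdStr)
# $GPHOME/bin/lib/gpconfigurenewsegment -c "<confinfo>" -l /home/gpadmin/gpAdminLogs -n -b 16 --force-overwrite
----------------------------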

gpconfigurenewsegment

gpconfigurenewsegment is mainly used to configure segment directories in an existing GPDB cluster; it is called by at least gpexpand, gprecoverseg, and gpaddmirrors. The script's main flow is:

# gpconfigurenewsegment

----------------------------
try:
    (options, args, seg_info) = parseargs()

    logger = gplog.setup_tool_logging(EXECNAME, unix.getLocalHostname(), unix.getUserName(),
                                      logdir=options.logfileDirectory)

    if options.verbose:
        gplog.enable_verbose_logging()

    logger.info("Starting gpconfigurenewsegment with args: %s" % ' '.join(sys.argv[1:]))

    # create the worker pool
    pool = WorkerPool(numWorkers=options.batch_size)

    for seg in seg_info:
        dataDir = seg[0]
        port = seg[1]
        isPrimary = seg[2] == "true"
        directoryValidationLevel = seg[3] == "true"
        dbid = int(seg[4])
        contentid = int(seg[5])

        # These variables should not be used if it's a primary
        # they will be junk data passed through the config.
        primaryHostName = seg[6]
        primarySegmentPort = int(seg[7])
        progressFile = seg[8]
        
        
        # ConfExpSegCmd encapsulates the recovery work: checking/creating the data directory and invoking pg_basebackup
        cmd = ConfExpSegCmd( name = 'Configure segment directory'
                           , cmdstr = ' '.join(sys.argv)
                           , datadir = dataDir
                           , port = port
                           , dbid = dbid
                           , contentid = contentid
                           , newseg = options.newsegments
                           , tarfile = options.tarfile
                           , useLighterDirectoryValidation = directoryValidationLevel
                           , isPrimary = isPrimary
                           , syncWithSegmentHostname = primaryHostName
                           , syncWithSegmentPort = primarySegmentPort
                           , verbose = options.verbose
                           , validationOnly = options.validationOnly
                           , forceoverwrite = options.forceoverwrite
                           , replicationSlotName = options.replicationSlotName
                           , logfileDirectory = options.logfileDirectory
                           , progressFile= progressFile
                           )
        # add the command to the pool, where it starts executing immediately
        pool.addCommand(cmd)
    
    # block until the work queue is drained
    pool.join()

    if options.validationOnly:
        errors = []
        for item in pool.getCompletedItems():
            if item.get_results().rc != 0:
                errors.append(str(item.get_results().stderr).replace("\n", " "))

        if errors:
            print >> sys.stderr, "\n".join(errors)
            sys.exit(1)
        else: sys.exit(0)
    else:
        try:
            pool.check_results()
        except Exception, e:
            if options.verbose:
                logger.exception(e)
            logger.error(e)
            print >> sys.stderr, e
            sys.exit(1)

    sys.exit(0)

----------------------------
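
Reading the indexing in the loop above back out, each seg_info record carries nine fields. A hedged reconstruction (the values are invented; the exact confinfo wire encoding that parseargs decodes is not shown in this excerpt):

----------------------------
# Invented values; field meanings taken from the seg[0]..seg[8] indexing above.
seg = [
    "/data/mirror/gpseg0",   # seg[0] data directory
    "7001",                  # seg[1] port
    "false",                 # seg[2] isPrimary
    "true",                  # seg[3] lighter directory validation
    "5",                     # seg[4] dbid
    "0",                     # seg[5] contentid
    "sdw1",                  # seg[6] primary hostname (junk data for a primary)
    "6001",                  # seg[7] primary port (junk data for a primary)
    "/home/gpadmin/gpAdminLogs/pg_basebackup.20240101_000000.dbid5.out",  # seg[8] progress file
]
----------------------------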

ConfExpSegCmd checks the data directory, creates it with mode 0700 when it does not exist, and calls PgBaseBackup to build the actual pg_basebackup command; the work is triggered through its run method. Part of the code:

# gpconfigurenewsegment

----------------------------
if not self.isPrimary:
    # If the caller does not specify a pg_basebackup
    # progress file, then create a temporary one and handle
    # its deletion upon success.
    shouldDeleteProgressFile = False
    if not self.progressFile:                     # <--- build the pg_basebackup progress-file path
        shouldDeleteProgressFile = True
        self.progressFile = '%s/pg_basebackup.%s.dbid%s.out' % (gplog.get_logger_dir(),
                                                                datetime.datetime.today().strftime('%Y%m%d_%H%M%S'),
                                                                self.dbid)
    # Create a mirror based on the primary
    cmd = PgBaseBackup(pgdata=self.datadir,                # <--- PgBaseBackup (defined in pg.py) assembles the final pg_basebackup command
                       host=self.syncWithSegmentHostname,
                       port=str(self.syncWithSegmentPort),
                       replication_slot_name=self.replicationSlotName,
                       forceoverwrite=self.forceoverwrite,
                       target_gp_dbid=self.dbid,
                       logfile=self.progressFile)
    try:
        logger.info("Running pg_basebackup with progress output temporarily in %s" % self.progressFile)
        cmd.run(validateAfter=True)              # <--- run executes the pg_basebackup command; run itself is inherited from the Command class in base.py
        self.set_results(CommandResult(0, '', '', True, False))
        if shouldDeleteProgressFile:
            os.remove(self.progressFile)
    except Exception, e:
        self.set_results(CommandResult(1,'',e,True,False))
        raise
----------------------------
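
A minimal sketch of the directory handling described above, i.e. creating the data directory with mode 0700 when it does not exist (the function name here is illustrative; the real logic lives in ConfExpSegCmd):

----------------------------
import os
import stat

def ensure_datadir(datadir):
    # Illustrative only: create the directory owner-rwx (0700) if missing,
    # as ConfExpSegCmd does before handing the path to pg_basebackup.
    if not os.path.exists(datadir):
        os.makedirs(datadir)
        os.chmod(datadir, stat.S_IRWXU)  # 0700
----------------------------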

pg.py

The code of the PgBaseBackup class follows; it mainly builds the pg_basebackup command that is actually executed:

# pg.py

----------------------------
class PgBaseBackup(Command):
    def __init__(self, pgdata, host, port, replication_slot_name=None, excludePaths=[], ctxt=LOCAL, remoteHost=None, forceoverwrite=False, target_gp_dbid=0, logfile=None,
                 recovery_mode=True):
        cmd_tokens = ['pg_basebackup', '-c', 'fast']
        cmd_tokens.append('-D')
        cmd_tokens.append(pgdata)
        cmd_tokens.append('-h')
        cmd_tokens.append(host)
        cmd_tokens.append('-p')
        cmd_tokens.append(port)
        cmd_tokens.extend(self._xlog_arguments(replication_slot_name))

        if forceoverwrite:
            cmd_tokens.append('--force-overwrite')

        if recovery_mode:
            cmd_tokens.append('--write-recovery-conf')

        # This is needed to handle Greenplum tablespaces
        cmd_tokens.append('--target-gp-dbid')
        cmd_tokens.append(str(target_gp_dbid))

        # We exclude certain unnecessary directories from being copied as they will greatly
        # slow down the speed of gpinitstandby if containing a lot of data
        if excludePaths is None or len(excludePaths) == 0:
            cmd_tokens.append('-E')
            cmd_tokens.append('./db_dumps')
            cmd_tokens.append('-E')
            cmd_tokens.append('./gpperfmon/data')
            cmd_tokens.append('-E')
            cmd_tokens.append('./gpperfmon/logs')
            cmd_tokens.append('-E')
            cmd_tokens.append('./promote')
        else:
            for path in excludePaths:
                cmd_tokens.append('-E')
                cmd_tokens.append(path)

        cmd_tokens.append('--progress')
        cmd_tokens.append('--verbose')

        if logfile:
            cmd_tokens.append('> %s 2>&1' % pipes.quote(logfile))

        cmd_str = ' '.join(cmd_tokens)

        self.command_tokens = cmd_tokens

        Command.__init__(self, 'pg_basebackup', cmd_str, ctxt=ctxt, remoteHost=remoteHost)

    @staticmethod
    def _xlog_arguments(replication_slot_name):
        if replication_slot_name:
            return ["--slot", replication_slot_name, "--xlog-method", "stream"]
        else:
            return ['--xlog']
----------------------------
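
Putting it together, a hedged example of the final command string (hostname, port, dbid, slot name, and paths are invented; the output comment is wrapped for readability, but the actual string is a single line):

----------------------------
# Invented values for illustration.
cmd = PgBaseBackup(pgdata='/data/mirror/gpseg0', host='sdw1', port='6001',
                   replication_slot_name='some_slot',
                   forceoverwrite=True, target_gp_dbid=5,
                   logfile='/tmp/pg_basebackup.out')
print(cmd.cmdStr)
# pg_basebackup -c fast -D /data/mirror/gpseg0 -h sdw1 -p 6001 \
#   --slot some_slot --xlog-method stream --force-overwrite \
#   --write-recovery-conf --target-gp-dbid 5 \
#   -E ./db_dumps -E ./gpperfmon/data -E ./gpperfmon/logs -E ./promote \
#   --progress --verbose > /tmp/pg_basebackup.out 2>&1
----------------------------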

The GP Python Tool Thread Pool

GP's Python multithreading is built on a queue: the commands to run in parallel are put on the queue, task_done() is called as each one finishes, join() blocks while the queue still has unfinished tasks, and the main thread resumes only after all queued work is done. The main thread therefore always waits for the pool to drain; GP's Python tools cannot run this work asynchronously alongside it.
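
A minimal standalone sketch of that pattern using the standard library (Python 3 module name queue; the GPDB code quoted below is Python 2 and imports Queue):

--------------------------
import threading
import queue

work = queue.Queue()

def worker():
    while True:
        item = work.get()       # blocks until an item is available
        if item is None:        # sentinel: exit the loop
            work.task_done()
            return
        print("running %s" % item)
        work.task_done()        # exactly one task_done() per get()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

for item in ["cmd1", "cmd2", "cmd3"]:
    work.put(item)

work.join()                     # blocks until every task_done() has arrived
for _ in threads:
    work.put(None)              # stop the workers
for t in threads:
    t.join()
--------------------------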

WorkerPool

Creating a thread pool instantiates the WorkerPool class:

# base.py 

--------------------------
class WorkerPool(object):
    """TODO:"""

    halt_command = 'halt command'

    def __init__(self, numWorkers=16, items=None, daemonize=False, logger=gplog.get_default_logger()):
        if numWorkers <= 0:
            raise Exception("WorkerPool(): numWorkers should be greater than 0.")
        self.workers = []
        self.should_stop = False
        self.work_queue = Queue()
        self.completed_queue = Queue()
        self._assigned = 0
        self.daemonize = daemonize
        self.logger = logger

        if items is not None:
            for item in items:
                self.addCommand(item)

        for i in range(0, numWorkers):
            w = Worker("worker%d" % i, self)
            self.workers.append(w)
            w.start()
        self.numWorkers = numWorkers
        self.logger.debug("WorkerPool() initialized with %d workers" % self.numWorkers)

    ###
    def getNumWorkers(self):
        return self.numWorkers

    def getNextWorkItem(self):
        return self.work_queue.get(block=True)

    def addFinishedWorkItem(self, command):
        self.completed_queue.put(command)
        self.work_queue.task_done()

    def markTaskDone(self):
        self.work_queue.task_done()

    def addCommand(self, cmd):
        self.logger.debug("Adding cmd to work_queue: %s" % cmd.cmdStr)
        self.work_queue.put(cmd)
        self._assigned += 1

    def _join_work_queue_with_timeout(self, timeout):
        """
        Queue.join() unfortunately doesn't take a timeout (see
        https://bugs.python.org/issue9634). Fake it here, with a solution
        inspired by notes on that bug report.

        XXX This solution uses undocumented Queue internals (though they are not
        underscore-prefixed...).
        """
        done_condition = self.work_queue.all_tasks_done
        done_condition.acquire()
        try:
            while self.work_queue.unfinished_tasks:
                if (timeout <= 0):
                    # Timed out.
                    return False

                start_time = time.time()
                done_condition.wait(timeout)
                timeout -= (time.time() - start_time)
        finally:
            done_condition.release()

        return True

    def join(self, timeout=None):
        """
        Waits (up to an optional timeout) for the worker queue to be fully
        completed, and returns True if the pool is now done with its work.

        A None timeout indicates that join() should wait forever; the return
        value is always True in this case. Zero and negative timeouts indicate
        that join() will query the queue status and return immediately, whether
        the queue is done or not.
        """
        if timeout is None:
            self.work_queue.join()
            return True

        return self._join_work_queue_with_timeout(timeout)

    def joinWorkers(self):
        for w in self.workers:
            w.join()

    def _pop_completed(self):
        """
        Pops an item off the completed queue and decrements the assigned count.
        If the queue is empty, throws Queue.Empty.
        """
        item = self.completed_queue.get(False)
        self._assigned -= 1
        return item

    def getCompletedItems(self):
        completed_list = []
        try:
            while True:
                item = self._pop_completed() # will throw Empty
                if item is not None:
                    completed_list.append(item)
        except Empty:
            return completed_list

    def check_results(self):
        """ goes through all items in the completed_queue and throws an exception at the
            first one that didn't execute successfully

            throws ExecutionError
        """
        try:
            while True:
                item = self._pop_completed() # will throw Empty
                if not item.get_results().wasSuccessful():
                    raise ExecutionError("Error Executing Command: ", item)
        except Empty:
            return

    def empty_completed_items(self):
        while not self.completed_queue.empty():
            self._pop_completed()

    def isDone(self):
        # TODO: not sure that qsize() is safe
        return (self.assigned == self.completed_queue.qsize())

    @property
    def assigned(self):
        """
        A read-only count of the number of commands that have been added to the
        pool. This count is only decremented when items are removed from the
        completed queue via getCompletedItems(), empty_completed_items(), or
        check_results().
        """
        return self._assigned

    @property
    def completed(self):
        """
        A read-only count of the items in the completed queue. Will be reset to
        zero after a call to empty_completed_items() or getCompletedItems().
        """
        return self.completed_queue.qsize()

    def haltWork(self):
        self.logger.debug("WorkerPool haltWork()")
        self.should_stop = True
        for w in self.workers:
            w.haltWork()
            self.work_queue.put(self.halt_command)

--------------------------
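
A minimal usage sketch of this pool (the hostnames and the command are hypothetical; Command, WorkerPool, and REMOTE come from gppylib.commands.base as quoted above):

--------------------------
from gppylib.commands import base

pool = base.WorkerPool(numWorkers=4)
for host in ["sdw1", "sdw2"]:  # hypothetical hosts
    pool.addCommand(base.Command("uptime on %s" % host, "uptime",
                                 ctxt=base.REMOTE, remoteHost=host))
pool.join()            # wait for all queued commands to finish
pool.check_results()   # raises ExecutionError on the first failed command
pool.haltWork()        # stop the worker threads...
pool.joinWorkers()     # ...and wait for them to exit
--------------------------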

Worker

The run method shows up throughout the Python code; the loop that actually executes queued commands is the run method of the Worker class (a Thread subclass) in base.py:

# base.py 

--------------------------
class Worker(Thread):
    """TODO:"""
    pool = None
    cmd = None
    name = None
    logger = None

    def __init__(self, name, pool):
        self.name = name
        self.pool = pool
        self.logger = logger
        Thread.__init__(self)
        self.daemon = pool.daemonize

    def run(self):
        while True:
            try:
                try:
                    self.cmd = self.pool.getNextWorkItem()
                except TypeError:
                    # misleading exception raised during interpreter shutdown
                    return

                # we must have got a command to run here
                if self.cmd is None:
                    self.logger.debug("[%s] got a None cmd" % self.name)
                    self.pool.markTaskDone()
                elif self.cmd is self.pool.halt_command:
                    self.logger.debug("[%s] got a halt cmd" % self.name)
                    self.pool.markTaskDone()
                    self.cmd = None
                    return
                elif self.pool.should_stop:
                    self.logger.debug("[%s] got cmd and pool is stopped: %s" % (self.name, self.cmd))
                    self.pool.markTaskDone()
                    self.cmd = None
                else:
                    self.logger.debug("[%s] got cmd: %s" % (self.name, self.cmd.cmdStr))
                    self.cmd.run()
                    self.logger.debug("[%s] finished cmd: %s" % (self.name, self.cmd))
                    self.pool.addFinishedWorkItem(self.cmd)
                    self.cmd = None

            except Exception, e:
                self.logger.exception(e)
                if self.cmd:
                    self.logger.debug("[%s] finished cmd with exception: %s" % (self.name, self.cmd))
                    self.pool.addFinishedWorkItem(self.cmd)
                    self.cmd = None

    def haltWork(self):
        self.logger.debug("[%s] haltWork" % self.name)

        # this was originally coded as
        #
        #    if self.cmd is not None:
        #        self.cmd.interrupt()
        #        self.cmd.cancel()
        #
        # but as observed in MPP-13808, the worker thread's run() loop may set self.cmd to None
        # past the point where the calling thread checks self.cmd for None, leading to a curious
        # "'NoneType' object has no attribute 'cancel' exception" which may prevent the worker pool's
        # haltWorkers() from actually halting all the workers.
        #
        c = self.cmd
        if c is not None and isinstance(c, Command):
            c.interrupt()
            c.cancel()
--------------------------

Notes on Python's Queue Module

  • __init__: the constructor; its optional maxsize parameter (default 0, meaning unbounded) sets the queue length.
  • get, which takes an item off the queue (see the sketch after this list):
    1. It has two parameters: block (optional, default True) controls whether the call blocks, and timeout (optional, default None) sets a time limit. By default the call blocks the current thread until an item can be taken.
    2. With block=True and a positive float timeout, the call blocks for at most timeout seconds and then raises queue.Empty.
    3. With block=False the call does not block: it returns an item immediately or raises queue.Empty if the queue is empty. timeout is ignored when block is False.
  • get_nowait: equivalent to get(False), i.e. a non-blocking get.
  • put, which stores an item in the queue:
    1. It has three parameters: item (required), block (optional, default True), and timeout (optional, default None). On a bounded queue, put blocks while the queue is full until the item can be stored; an unbounded queue never blocks for space, but unchecked growth can exhaust memory.
    2. With block=True and a positive float timeout, the call blocks for at most timeout seconds and then raises queue.Full.
    3. With block=False the call does not block: it stores the item immediately or raises queue.Full if the queue is full. timeout is ignored when block is False.
  • put_nowait: put_nowait(item) is equivalent to put(item, False), i.e. a non-blocking put.
  • empty: returns True if the queue is empty, False otherwise. Under concurrency, another thread may put an item right after empty() returns True.
  • full: returns True if the queue is full, False otherwise; an unbounded queue always returns False. Likewise, another thread may get an item right after full() returns True.
  • qsize: returns the approximate queue size. Because of races, get() may still block even when qsize() > 0, and put() may still block even when qsize() < maxsize.
  • task_done: called by consumers to signal that one previously fetched item has been fully processed. It must be used together with join.
  • join: called by the producer side; blocks until every item that was put has been marked complete via task_done (not merely until the queue looks empty). It must be used together with task_done.
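
A short sketch of the blocking and exception behavior described above (Python 3 names):

--------------------------
import queue

q = queue.Queue(maxsize=1)   # bounded queue of length 1
q.put("a")

try:
    q.put("b", block=True, timeout=0.1)  # full: blocks 0.1s, then raises
except queue.Full:
    print("queue.Full after the timeout")

print(q.get_nowait())        # "a"; same as q.get(False)
try:
    q.get_nowait()           # now empty: raises immediately
except queue.Empty:
    print("queue.Empty without blocking")
--------------------------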