Design and Implementation of an Asynchronous Task Thread Pool Based on the Producer-Consumer Pattern

      I logged on yesterday and saw that someone had posted a thread-pool implementation that drew quite a bit of discussion. Having skimmed it, that design and implementation are fairly elementary, and it has quite a few problems. As it happens, a few days ago I designed and implemented an asynchronous task-processing component based on the producer-consumer pattern for a project, with a thread pool at its core, so I am posting it here as a learning reference.

      Some readers may ask: JDK 1.5 already ships with thread pools via ExecutorService, so why roll your own? Let me first explain the background and the design rationale.

 1. The thread pools behind JDK's ExecutorService provide only basic behavior. A task entering such a pool does one of two things: it waits, or it activates a new thread — the former with a fixed thread pool, the latter with a cached thread pool. The project, however, needed elasticity in two respects: an elastic worker pool (one that can adjust itself to the current situation), and an elastic task queue (with a fixed capacity, where overflow tasks are temporarily persisted and re-activated once the queue drains to a threshold). Commons Pool actually covers the first requirement, but it has some design problems and is not well suited to managing thread pools (a topic for another post).
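To make the "elastic task queue" idea in point 1 concrete, here is a minimal, self-contained sketch. This is not the component's real code: the class name, the Integer stand-in for Task, and the in-memory deque standing in for real persistence are all illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.PriorityBlockingQueue;

class ElasticQueueSketch {
    private final PriorityBlockingQueue<Integer> live = new PriorityBlockingQueue<>();
    private final Deque<Integer> store = new ArrayDeque<>();   // stand-in for a real Persistence layer
    private final int capacity;
    private final int threshold;

    ElasticQueueSketch(int capacity) {
        this.capacity = capacity;
        this.threshold = (int) (capacity * 0.8);   // same default ratio as the component's config
    }

    // passivate overflow instead of blocking or rejecting
    synchronized void add(int task) {
        if (live.size() >= capacity) {
            store.addLast(task);
        } else {
            live.offer(task);
        }
    }

    // re-activate stored tasks once the live queue drains to the threshold
    synchronized Integer take() {
        Integer task = live.poll();
        while (live.size() <= threshold && !store.isEmpty()) {
            live.offer(store.pollFirst());
        }
        return task;
    }

    synchronized int liveSize()  { return live.size(); }
    synchronized int storeSize() { return store.size(); }

    public static void main(String[] args) {
        ElasticQueueSketch q = new ElasticQueueSketch(5);
        for (int i = 0; i < 8; i++) q.add(i);        // 5 live, 3 spilled to the store
        System.out.println(q.liveSize() + " live / " + q.storeSize() + " stored");
        q.take();                                    // draining pulls one stored task back in
        System.out.println(q.liveSize() + " live / " + q.storeSize() + " stored");
    }
}
```

The real component does the passivation/activation asynchronously against a persistence interface; the sketch only shows the capacity/threshold mechanics.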

2. Whether for the task queue or for the worker threads, a well-designed component carries a few more requirements: it must be auditable, meaning its performance and actual work quality can be inspected and evaluated; its behavior must be extensible; and it must be robust and controllable. So beyond the producer-consumer thread pool itself, there must be management threads that can reliably monitor and control the pool. In addition, the component's activities must be described by a defined set of events together with an event-listener mechanism, whose goals are, first, to let the component introspect its own state, and second, to give clients an extension point (for example, to implement dynamic task chains).
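The event/listener mechanism described above can be reduced to a small runnable sketch. The enum values and interfaces here are simplified stand-ins for the component's TaskEvent and TaskEventListener, not its actual API.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

class ListenerChainSketch {
    enum TaskEvent { IN_QUEUE, FINISH, FAIL }

    interface TaskEventListener { void onEvent(String taskId, TaskEvent event); }

    // a chain fans one event out to every registered listener
    static class ListenerChain {
        private final List<TaskEventListener> listeners = new CopyOnWriteArrayList<>();
        void addListener(TaskEventListener l) { listeners.add(l); }
        void onEvent(String taskId, TaskEvent e) {
            for (TaskEventListener l : listeners) l.onEvent(taskId, e);
        }
    }

    static int countFinishEvents() {
        ListenerChain chain = new ListenerChain();
        int[] finished = {0};
        // a client listener: this is where things like dynamic task chains could hook in
        chain.addListener((id, e) -> { if (e == TaskEvent.FINISH) finished[0]++; });
        chain.onEvent("t1", TaskEvent.IN_QUEUE);
        chain.onEvent("t1", TaskEvent.FINISH);
        chain.onEvent("t2", TaskEvent.FINISH);
        return finished[0];
    }

    public static void main(String[] args) {
        System.out.println(countFinishEvents());
    }
}
```

The real component additionally dispatches listener callbacks through an ExecutorService (see `chain.setService(...)` in the implementations below), so listeners cannot stall the pool.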

 

    With the background covered, here is a brief introduction to the main roles in the component (the whole thing runs to about 60 classes, so I cannot explain or paste everything):

1. WorkEngine: besides extending Switchable, the on/off control interface, it consists of three core parts and three extension points.

    The three core parts are the heart of the component: the task queue, the worker broker, and the task report queue.

    The three extension points are: configuration, the persistence interface, and the control-hook interface.

 

public interface Switchable {

    void cancelWork() ;

    String getId() ;

    boolean isStartForWork() ;

    void startWork() ;

    void stopWork() ;
}

public interface WorkEngine extends Switchable {

    void addControlHook(Switchable hook) ;

    WorkConfiguration getConfiguration() ;

    Persistence getPersistence() ;

    TaskReportQueue getReportQueue() ;

    TaskQueue getTaskQueue() ;

    WorkerBroker getWorkerBroker() ;

}

 

 2. A quick note on each of the three core component interfaces:

    First, the task queue TaskQueue. Compared with a conventional queue, it adds event listeners and priority-based task reordering.

 

public interface TaskQueue extends Iterable<Task> {

    void addEventListener(TaskEventListener listener) ;

    /**
     * add a new task to the tail of the queue
     * @param task
     */
    void addTask(Task task) ;

    /**
     * check whether any task exists in the queue.
     * @return
     */
    boolean existTask() ;

    /**
     * sort tasks in queue according to priority
     */
    void sequence() ;

    /**
     * remove the task at the head of queue
     */
    Task removeTask() ;

    /**
     * remove the indicated task in queue
     * @param task
     */
    void removeTask(Task task) ;

    int size() ;

    int capacity() ;

}
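Since the PriorityBasedComparator used by the implementation below is not shown, here is a hedged sketch of how the priority ordering behind `sequence()` and the priority queue could look. The Task fields used here (priority, inQueueTime) and the "larger means more urgent" convention are assumptions, not the component's real definitions.

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

class PrioritySketch {
    static final class Task {
        final String name;
        final int priority;        // larger = more urgent (assumed convention)
        final long inQueueTime;    // millis when the task entered the queue

        Task(String name, int priority, long inQueueTime) {
            this.name = name;
            this.priority = priority;
            this.inQueueTime = inQueueTime;
        }
    }

    // high priority drains first; FIFO among equal priorities
    static final Comparator<Task> BY_PRIORITY =
            Comparator.<Task>comparingInt(t -> -t.priority)
                      .thenComparingLong(t -> t.inQueueTime);

    static String headOf(Task... tasks) {
        PriorityBlockingQueue<Task> q = new PriorityBlockingQueue<>(16, BY_PRIORITY);
        for (Task t : tasks) q.offer(t);
        return q.poll().name;   // the task the pool would dispatch next
    }

    public static void main(String[] args) {
        System.out.println(headOf(new Task("a", 1, 0), new Task("b", 5, 10), new Task("c", 5, 5)));
    }
}
```

Time-based priority promotion (getPriorityAdjustmentThreshold in the configuration) would then just bump `priority` for tasks whose waiting time exceeds the threshold before a re-`sequence()`.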

 

    Next, the worker broker (it has a fitting nickname: the foreman) and the worker interface (the workers themselves):

 

public interface WorkerBroker {

    void fireWorker();

    void fireWorker(Worker worker);

    int getIdleWorkers();

    int getWorkingWorkers();

    int getMaxWorkers();

    boolean hireWorker(Worker worker);

    Worker require(boolean overdue);

    boolean returnBack(Worker worker);

    void setWorkerFactory(WorkerFactory factory) ;

    void addEventListener(WorkerEventListener listener) ;
}

public interface Worker extends Switchable {

    boolean isAvailable();

    boolean isStartForWork() ;

    void setTaskEventNotifier(TaskEventNotifier notifier);

    void setWorkerEventNotifier(WorkerEventNotifier notifier);

    void work(Task task);

    int getNumOfExecutedTasks() ;
}
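As the implementation further down shows, the broker's core bookkeeping is a Semaphore that caps the number of workers out on loan plus a BlockingQueue holding the idle ones: `require` takes from the queue and acquires a permit, `returnBack` does the reverse. A stripped-down sketch of just that bookkeeping (workers are plain Strings here; all names are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Semaphore;

class BrokerSketch {
    private final BlockingQueue<String> idle;
    private final Semaphore permits;

    BrokerSketch(int max, List<String> workers) {
        idle = new ArrayBlockingQueue<>(max);
        idle.addAll(workers);
        permits = new Semaphore(max);
    }

    // hand out an idle worker and count it as working
    String require() {
        try {
            String w = idle.take();
            permits.acquire();
            return w;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    // put the worker back and free its permit
    boolean returnBack(String w) {
        boolean ok = idle.offer(w);
        if (ok) permits.release();
        return ok;
    }

    int idleCount()        { return idle.size(); }
    int availablePermits() { return permits.availablePermits(); }

    public static void main(String[] args) {
        BrokerSketch b = new BrokerSketch(2, Arrays.asList("w1", "w2"));
        String w = b.require();
        System.out.println(b.idleCount() + " idle, " + b.availablePermits() + " permits");
        b.returnBack(w);
        System.out.println(b.idleCount() + " idle, " + b.availablePermits() + " permits");
    }
}
```

The real broker layers worker creation, overdue tracking, the backup worker, and events on top of this skeleton.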

 

 Finally, the task report queue, which handles the execution report of every asynchronous task:

 

public interface TaskReportQueue extends Iterable<TaskExecutionReport> {

    void submitReport(TaskExecutionReport report) ;

    int capacity();

    int size() ;
}

 

 3. Lastly, here is the component's configuration interface, so it is clear exactly what is configurable:

 

public interface WorkConfiguration {

    /**
     * when the size of the task queue drops below a specific threshold,
     * the engine will activate tasks from persistence.
     * the default threshold is getTaskQueueCapacity() * 0.8.
     * a return value of 0 means the default setting is used.
     * @return
     */
    int getActivateThreshold() ;

    /**
     *
     * @return map key is group name, value is collection of task category
     */
    Map<String, List<String>> getTaskGroupInformation() ;

    Map<String, Map<String, Class<? extends TaskContextConverter>>> getDefinedConverters() ;

    /**
     * the initial number of workers in the worker pool
     * @return
     */
    int getInitNumberofWorkers();

    /**
     * get the interval for checking whether persisted tasks exist.
     * the unit is seconds.
     * The default value is 300.
     * @return
     */
    int getIntervalToCheckPersistence() ;

    /**
     * the maximum time to wait when requiring a worker from the broker.
     * @return
     */
    int getLatencyTimeForWorker();

    /**
     * get apache common logging instance.
     * @return
     */
    Log getLog();

    /**
     * the maximum number of workers in the pool
     * @return
     */
    int getMaxNumberofWorkers();

    /**
     * the maximum number of retries when an unexpected (not user-thrown) exception occurs.
     * @return
     */
    int getMaxRetryTimesWhenError() ;

    /**
     * the number of result processor thread
     * @return
     */
    int getNumberofResultProcessors();

    /**
     * 0 means unbounded, 1 a single thread, any other positive value a fixed-size pool; negative values are illegal
     * @return
     */
    int getNumberofServiceThreads() ;

    Class<? extends Persistence> getPersistenceClass();

    /**
     * a task's priority is promoted after this amount of time has elapsed.
     * the time unit is seconds.
     * @return the promotion interval
     */
    int getPriorityAdjustmentThreshold() ;

    /**
     * the size of task result queue
     * @return
     */
    int getResultQueueCapacity();

    List<Class<? extends TaskEventListener>> getTaskEventListeners();

    Map<String, Map<TaskProcessLifeCycle, Class<?>>> getTaskCategoryInformation();
    /**
     * the overdue limit for executing a single task.
     * the time unit is seconds.
     * @return
     */
    int getTaskOverdue();

    /**
     * the size of task queue
     * @return
     */
    int getTaskQueueCapacity();


    List<Class<? extends WorkerEventListener>> getWorkerEventListeners();

    /**
     * whether the system automatically promotes task priority based on waiting time.
     * @return
     */
    boolean supportPriorityAdjustment() ;

    /**
     * whether to use the backup worker when the worker queue is empty.
     * @return
     */
    boolean useBackupWorker();

}
 

 

 

With the key interfaces listed, here comes what everyone cares about most: the implementation. It targets JDK 1.5, so it uses the java.util.concurrent Queue and Atomic classes.

First, TaskQueueImpl:

 

public final class TaskQueueImpl extends ControlSwitcher implements TaskQueue, TaskEventNotifier, WorkerBrokerAware {
    private WorkerBroker broker;

    private int capacity ;

    private TaskEventListenerChain chain = new TaskEventListenerChain();

    private Log logger ;

    private BlockingQueue<Task> queue;

    private int threshold ;

    private long lastCheckTime ;

    private long intervalToCheck ;

    private AtomicBoolean existPersistentTask = new AtomicBoolean(false);

    public TaskQueueImpl() {
        super();
    }

    void activateTasks(int size) {
        if (!existPersistentTask.get()) {
            // skip only when we checked recently; lastCheckTime == 0 means we have never checked yet
            if (lastCheckTime > 0L && System.currentTimeMillis() - lastCheckTime <= intervalToCheck) {
                return ;
            }
        }
        logger.info("the queue is available, expect to activate ["+size+"] tasks.");
        lastCheckTime = System.currentTimeMillis() ;
        Collection<Task> tasks = getEngine().getPersistence().activate(size);
        logger.info("actual activated "+tasks.size()+" tasks.");
        for (Task task : tasks) {
            putInQueue(task);
        }
        if (tasks.size() < size) {
            existPersistentTask.set(false);
        }
    }

    public void addEventListener(TaskEventListener listener) {
        chain.addListener(listener);
    }


    public void addTask(Task task) {
        if (queue.size() >= capacity) {
            passivateTask(task);
            return;
        }
        putInQueue(task);
    }

    private void cancelTask(Task task) {
        task.setState(TaskState.OUT_OF_QUEUE);
        chain.onEvent(task, TaskEvent.CANCEL);
    }

    public boolean existTask() {
        return queue.size() > 0;
    }

    @Override
    public Runnable getManagementThread(final WorkConfiguration config) {
        final TaskEventNotifier notifier = this ;
        final WorkerBroker broker = this.broker ;
        final Log logger = this.logger ;
        final BlockingQueue<Task> queue = this.queue ;
        final int capacity = this.capacity ;
        final int threshold = this.threshold ;
        return new Runnable() {
            public void run() {
                while (isStartForWork()) {
                    Worker worker = broker.require(true);
                    if (worker == null) {
                        logger.warn("could not require a worker within the specified time.");
                        worker = broker.require(false);
                    }
                    if (logger.isDebugEnabled()) {
                        logger.debug("required a worker[id="+worker.getId()+"].");
                    }
                    Task task = null ;
                    try {
                        if (queue.size() <= threshold) {
                            activateTasks(capacity - threshold);
                        }
                        task = queue.take();
                    } catch (InterruptedException e) {
                        logger.warn("The task queue is interrupted, engine is still start for work[" + isStartForWork()
                                        + "].");
                    }
                    if (task == null) {
                        logger.warn("did not find any task in the queue in time, will return back the worker.");
                        broker.returnBack(worker);
                        continue ;
                    }

                    task.setOutQueueTime(DateFormatter.now());
                    try {
                        if (!worker.isStartForWork()) {
                            if (logger.isDebugEnabled()) {
                                logger.debug("start worker[id="+worker.getId()+"]");
                            }
                            worker.startWork() ;
                        }
                        worker.setTaskEventNotifier(notifier);
                        worker.work(task);
                    } catch (Throwable th) {
                        logger.warn("found exception when the worker[id="+worker.getId()+"] start to work.",th);
                        broker.returnBack(worker);
                    }
                }
            }
        };
    }

    public Iterator<Task> iterator() {
        Task[] ts =queue.toArray(new Task[queue.size()]);
        return Arrays.asList(ts).iterator() ;
    }

    public void notifyTaskEvent(Task task, TaskEvent event) {
        if (event == TaskEvent.FAIL) {
            int max = getConfiguration().getMaxRetryTimesWhenError();
            int current = task.getRetriedTimes();
            if (max > current) {
                task.setRetriedTimes(current+1);
                addTask(task);
            } else {
                logger.warn("the task " + task + " has retried " + current + " times, will be skipped.");
            }
        }
        chain.onEvent(task, event);
    }

    private void passivateTask(Task task) {
        getEngine().getPersistence().passivate(task);
        existPersistentTask.set(true) ;
        task.setState(TaskState.OUT_OF_QUEUE);
        chain.onEvent(task, TaskEvent.PERSISTENT);
    }

    public void postInitial() {
        capacity = getConfiguration().getTaskQueueCapacity();
        threshold = getConfiguration().getActivateThreshold();
        if (threshold <= 0 || threshold >= capacity) {
            Double dou = Double.valueOf(capacity * 0.8);
            threshold = dou.intValue() ;
        }

        queue = new PriorityBlockingQueue<Task>(capacity,
                                                new PriorityBasedComparator());
        chain.setService(getEngine().getExecutorService());
        logger = getConfiguration().getLog() ;
        intervalToCheck = getConfiguration().getIntervalToCheckPersistence() * 1000 ;
    }

    void putInQueue(Task task) {
        task.setInQueueTime(DateFormatter.now());
        if (queue.offer(task)) {
            task.setState(TaskState.WAITING);
            chain.onEvent(task, TaskEvent.IN_QUEUE);
        } else {
            logger.warn("failed to put task " + task + " in queue.");
        }

    }

    public Task removeTask() {
        Task task = queue.poll();
        if (task != null) {
            task.setOutQueueTime(DateFormatter.nowDate());
            cancelTask(task);
        } else {
            if (logger.isDebugEnabled()) {
                logger.debug("no task in queue.");
            }
        }
        return task ;
    }

    public void removeTask(Task task) {
        if (queue.remove(task)) {
            task.setOutQueueTime(DateFormatter.nowDate());
            cancelTask(task);
        } else {
            logger.warn("remove task from queue failed.");
        }
    }

    public void setBroker(WorkerBroker broker) {
        this.broker = broker;
    }

    public int capacity() {
        return capacity;
    }

    public int size() {
        return queue.size();
    }


}
 

 

Next, AbstractWorker and WorkerBrokerImpl:

public abstract class AbstractWorker extends ControlSwitcher implements Worker, WorkerBrokerAware {

    private Log logger ;

    private TaskEventNotifier taskEventNotifier ;

    private WorkerEventNotifier workerEventNotifier ;

    private WorkerBroker broker ;

    private final AtomicBoolean available = new AtomicBoolean(true) ;

    private final AtomicInteger executedTasks = new AtomicInteger(0);

    protected void doWork(Task task) {
        available.set(false) ;
        task.setExecuteTime(DateFormatter.now());
        TaskExecutionReport report = new TaskExecutionReport(task);
        TaskProcessDescriptor desc = getEngine().getTaskCategoryInformation(task.getCategory());
        long start = System.currentTimeMillis();

        try {
            if (desc == null) {
                throw new IllegalArgumentException("the task category["+task.getCategory()+"] is not registered.");
            }
            TaskEventNotifier ten = getTaskEventNotifier();
            TaskValidator validator = desc.getValidator();
            if (validator != null) {
                ValidationResult vr = validator.validate(task);
                if (!vr.isValid()) {
                    InvalidTaskResult defaultResult = new InvalidTaskResult(task, vr) ;
                    report.setResult(defaultResult);
                    getLogger().warn("the task " + task + " is invalid, caused by "+vr.getMessage());
                    if (ten != null) {
                        ten.notifyTaskEvent(task, TaskEvent.INVALID);
                    }
                    getEngine().getReportQueue().submitReport(report);
                    return ;
                }
            }
            report.setExecuteTime(new Date(start));
            TaskResult result = desc.getExecutor().execute(task);
            report.setResult(result);
            report.setCost(System.currentTimeMillis() - start);

            if (ten != null) {
                if (report.getCost() > getConfiguration().getTaskOverdue() * 1000) {
                    ten.notifyTaskEvent(task, TaskEvent.OVERDUE);
                } else {
                    ten.notifyTaskEvent(task, TaskEvent.FINISH);
                }
            }
            getEngine().getReportQueue().submitReport(report);
        } catch (Throwable exp) {
            handleException(report, exp);
        } finally {
            executedTasks.addAndGet(1);
            available.set(true);
        }
    }


    private void handleException(TaskExecutionReport report, Throwable exp) {
        Task task = report.getTask() ;
        TaskEventNotifier ten = getTaskEventNotifier();
        UnexpectedTaskResult defaultResult = new UnexpectedTaskResult(task);
        defaultResult.setException(exp);
        report.setResult(defaultResult);
        getLogger().error("found exception when executing task " + task, exp);
        if (ten != null) {
            getTaskEventNotifier().notifyTaskEvent(task, TaskEvent.FAIL);
        }
    }

    public Log getLogger() {
        return logger ;
    }

    TaskEventNotifier getTaskEventNotifier() {
        return taskEventNotifier;
    }

    WorkerEventNotifier getWorkerEventNotifier() {
        return workerEventNotifier;
    }

    public void postInitial() {
        logger = getConfiguration().getLog() ;
    }

    public void setTaskEventNotifier(TaskEventNotifier notifier) {
        this.taskEventNotifier = notifier ;
    }

    public void setWorkerEventNotifier(WorkerEventNotifier notifier) {
        this.workerEventNotifier = notifier ;
    }

    public boolean isAvailable() {
        return available.get();
    }

    public WorkerBroker getBroker() {
        return broker;
    }

    public void setBroker(WorkerBroker broker) {
        this.broker = broker;
    }

    public int getNumOfExecutedTasks() {
        return executedTasks.get();
    }

    @Override
    public synchronized void startWork() {
        super.startWork();
        executedTasks.set(0);
    }

}
 

 

public class WorkerBrokerImpl extends ControlSwitcher implements WorkerBroker, WorkerEventNotifier {

    private AtomicReference<Worker> agent = new AtomicReference<Worker>();

    private WorkerEventListenerChain chain = new WorkerEventListenerChain();

    private WorkerFactory factory;

    private AtomicReference<Long> firstAssign = new AtomicReference<Long>(0L);

    private Log logger ;

    private BlockingQueue<Worker> queue;

    private Semaphore sema ;

    private BlockingQueue<WorkerTracker> workingQueue ;

    public WorkerBrokerImpl() {
        super();
    }

    public void addEventListener(WorkerEventListener listener) {
        chain.addListener(listener);
    }

    /*
     *
     * @see com.oocllogistics.comp.workengine.worker.WorkerBroker#fireWorker()
     */
    public void fireWorker() {
        int balance = queue.size() ;
        // workers currently out working = total permits minus the available ones
        int using = getMaxWorkers() - sema.availablePermits();
        int initWorkers = getConfiguration().getInitNumberofWorkers();
        if (balance + using > initWorkers && using < initWorkers) {
            int fireSize = balance + using - initWorkers ;
            for (int i = 0; i < fireSize; i++) {
                Worker worker = queue.poll();
                if (worker == null) {
                    break;
                }
                if (logger.isDebugEnabled()) {
                    logger.debug("fire a worker[id="+worker.getId()+"] from queue.");
                }
                if (worker.isStartForWork()) {
                    logger.info("stop work of worker[id="+worker.getId()+"].");
                    worker.stopWork();
                }
                chain.onEvent(worker, WorkerEvent.FIRE);
            }
        }
    }

    public void fireWorker(Worker worker) {
        if (worker == null) {
            return ;
        }
        queue.remove(worker);
        removeFromWorkingQueue(worker);
        if (logger.isDebugEnabled()) {
            logger.debug("fire a worker[id="+worker.getId()+"] from queue.");
        }
        if (worker.isStartForWork()) {
            logger.info("stop work of worker[id="+worker.getId()+"].");
            worker.stopWork();
        }
        chain.onEvent(worker, WorkerEvent.FIRE);
    }

    /*
     *
     * @see com.oocllogistics.comp.workengine.worker.WorkerBroker#getIdleWorkers()
     */
    public int getIdleWorkers() {
        return queue.size();
    }

    @Override
    public Runnable getManagementThread(final WorkConfiguration config) {
        final long max = config.getTaskOverdue() * 1000 ;
        final int initial = getConfiguration().getInitNumberofWorkers() ;
        final AtomicReference<Long> firstAssign = this.firstAssign ;
        final BlockingQueue<WorkerTracker> workingQueue = this.workingQueue ;
        final Log logger = this.logger ;
        final WorkerEventListenerChain chain = this.chain ;
        return new Runnable() {
            public void run() {
                long lastFlag = System.currentTimeMillis() ;
                while (isStartForWork()) {
                    try {
                        long current = System.currentTimeMillis() ;
                        if (current - lastFlag > max && getIdleWorkers() > initial) {
                            fireWorker();
                            lastFlag = System.currentTimeMillis() ;
                        }
                        long startTime = firstAssign.get() ;
                        while (startTime == 0L) {
                            Thread.sleep(max);
                            startTime = firstAssign.get() ;
                        }
                        long interval = System.currentTimeMillis() - startTime;
                        if (interval >= max) {
                            WorkerTracker tracker = workingQueue.poll();
                            if (tracker != null) {
                                Worker worker = tracker.getWorker();
                                logger.warn("the worker["+worker.getId()+"] is overdue, remove from working queue.");
                                removeFromWorkingQueue(worker);
                                chain.onEvent(worker, WorkerEvent.OVERDUE);
                            }
                        }

                        if (max > interval) {
                            Thread.sleep(max - interval);
                        }
                    } catch (InterruptedException e) {
                        logger.warn("the worker broker is interrupted, " +
                                "engine is still start for work[" + isStartForWork() + "].");
                    }
                }
            }
        };
    }

    /*
     *
     * @see com.oocllogistics.comp.workengine.worker.WorkerBroker#getMaxWorkers()
     */
    public int getMaxWorkers() {
        return getConfiguration().getMaxNumberofWorkers() ;
    }

    /*
     * @see
     * com.oocllogistics.comp.workengine.worker.WorkerBroker#hireWorker(com.oocllogistics.comp.workengine.worker.Worker)
     */
    public boolean hireWorker(Worker worker) {
        if (worker == null || !worker.isAvailable() || worker.isStartForWork()) {
            return false;
        }
        if (worker instanceof StandardWorker) {
            logger.info("hire a new worker[id="+worker.getId()+"] into queue.");
            return returnBack(worker);
        }
        if (worker instanceof SpecialWorker) {
            if (agent.get() == null) {
                agent.set(worker);
                logger.info("hire a special worker[id="+worker.getId()+"] as backup.");
                return true ;
            }
        }
        return false ;
    }

    public void notifyWorkerEvent(Worker worker, WorkerEvent event) {
        boolean isReturned = false ;
        switch (event) {
            case CANCEL_WORK:
                isReturned = returnBack(worker);
                break;
            case OVERDUE:
                isReturned = returnBack(worker);
                break;
            case SUBMIT_TASK:
                isReturned = returnBack(worker);
                break;
            default:
                isReturned = true ;
                break;
        }
        if (!isReturned) {
            logger.warn("return back worker[id="+worker.getId()+"] failed.");
        }
        chain.onEvent(worker, event);
    }

    /*
     *
     * @see com.oocllogistics.comp.workengine.impl.ControlSwitcher#postInitial()
     */
    public void postInitial() {
        int init = getConfiguration().getInitNumberofWorkers();
        int total = getConfiguration().getMaxNumberofWorkers();
        sema = new Semaphore(total);
        queue = new ArrayBlockingQueue<Worker>(total);
        workingQueue = new ArrayBlockingQueue<WorkerTracker>(total);
        Collection<Worker> workerList = factory.createWorkers(init, WorkType.STANDARD_WORKER);
        for (Worker worker : workerList) {
            worker.setWorkerEventNotifier(this);
            queue.add(worker);
        }
        if (getConfiguration().useBackupWorker()) {
            Worker worker = factory.createWorker(WorkType.AGENT_WORKER);
            worker.setWorkerEventNotifier(this);
            agent.set(worker);
        }
        logger = getConfiguration().getLog() ;
        chain.setService(getEngine().getExecutorService());
    }

    void removeFromWorkingQueue(Worker worker) {
        if (worker == null) {
            return ;
        }
        synchronized (worker) {
            // WorkerTracker equality is based on the worker only, so a zero-time probe finds the entry
            WorkerTracker probe = new WorkerTracker(worker, 0L);
            if (workingQueue.remove(probe)) {
                WorkerTracker head = workingQueue.peek();
                firstAssign.set(head != null ? head.getStartWorkTime() : 0L);
                if (logger.isDebugEnabled()) {
                    logger.debug("remove the worker[id="+worker.getId()+"] from working queue.");
                }
            } else {
                logger.warn("failed to remove the worker[id="+worker.getId()+"] from working queue.");
            }
        }
    }

    /*
     *
     * @see com.oocllogistics.comp.workengine.worker.WorkerBroker#require(boolean)
     */
    public Worker require(boolean overdue) {
        Worker worker = null ;
        checkWhetherToAddWorker();
        try {
            if (overdue) {
                worker = queue.poll(getConfiguration().getLatencyTimeForWorker(), TimeUnit.SECONDS);
            } else {
                worker = queue.take();
            }
            worker = checkWhetherToUseBackupWorker(worker);
            if (worker == null) {
                return null ;
            }
            sema.acquire() ;
            WorkerTracker tracker = new WorkerTracker(worker, System.currentTimeMillis());
            if (workingQueue.offer(tracker)) {
                firstAssign.compareAndSet(0L, tracker.getStartWorkTime());
                if (logger.isDebugEnabled()) {
                    logger.debug("put worker[id="+worker.getId()+"] into working queue.");
                }
            } else {
                String msg = "failed to put worker[id="+worker.getId()+"] into working queue. " +
                        "It might cause a worker missing.";
                logger.warn(msg);
                chain.onEvent(worker, WorkerEvent.MISSED);
            }
            return worker ;
        } catch (InterruptedException e) {
            logger.warn("found an InterruptedException when requiring a worker.", e);
            return agent.get() ;
        }
    }

    private void checkWhetherToAddWorker() {
        synchronized (queue) {
            if (queue.size() == 0 && sema.availablePermits() > 0) {
                int size = sema.availablePermits() >= 2 ? 2 : sema.availablePermits() ;
                logger.info("the worker queue is empty, will employ "+size+" workers.");
                Collection<Worker> workerCol = factory.createWorkers(size, WorkType.STANDARD_WORKER);
                for (Worker wk : workerCol) {
                    wk.setWorkerEventNotifier(this);
                }
                queue.addAll(workerCol);
            }
        }
    }

    private Worker checkWhetherToUseBackupWorker(Worker worker) {
        if (worker == null) {
            if (getConfiguration().useBackupWorker()) {
                logger.warn("can not find any available worker in queue, will use backup worker.");
                return agent.get() ;
            }
            return null ;
        }
        return worker ;
    }

    /*
     * @see
     * com.oocllogistics.comp.workengine.worker.WorkerBroker#returnBack(com.oocllogistics.comp.workengine.worker.Worker)
     */
    public boolean returnBack(Worker worker) {
        if (worker == null) {
            return false;
        }
        //remove from working queue first.
        removeFromWorkingQueue(worker);
        boolean succeed = false ;
        synchronized (worker) {
            boolean exist = queue.contains(worker);
            if (exist) {
                logger.warn("the worker[id="+worker.getId()+"] is existing in worker queue.");
            } else {
                succeed = queue.offer(worker);
            }
        }
        if (succeed) {
            sema.release();
            if (logger.isDebugEnabled()) {
                logger.info("succeed to put worker[id="+worker.getId()+"] in queue.");
            }
            return true ;
        }
        logger.warn("return back the worker[id="+worker.getId()+"] failed.");
        return false ;
    }

    public void setWorkerFactory(WorkerFactory factory) {
        this.factory = factory;
    }

    public int getWorkingWorkers() {
        return workingQueue.size();
    }


}

 

   Finally, TaskReportQueueImpl, which here simply builds on the thread pool that ships with Java:
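The take-and-dispatch pattern this class uses can be reduced to a runnable sketch: one management thread takes reports off a blocking queue and hands each one to a JDK ExecutorService. The counter stands in for the real result processors, and all names here are illustrative.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class ReportDispatchSketch {
    static int process(List<String> reports) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);
        ExecutorService service = Executors.newFixedThreadPool(2);   // the JDK pool doing the actual work
        AtomicInteger handled = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(reports.size());

        // management thread: take a report, then hand the processing to the pool
        Thread manager = new Thread(() -> {
            try {
                while (true) {
                    queue.take();
                    service.execute(() -> {        // stand-in for desc.getResultProcessor().process(report)
                        handled.incrementAndGet();
                        done.countDown();
                    });
                }
            } catch (InterruptedException stop) {
                // the interrupt doubles as the shutdown signal, as in the real implementation
            }
        });
        manager.start();

        try {
            for (String r : reports) queue.put(r); // workers submit reports here
            done.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        manager.interrupt();
        service.shutdown();
        return handled.get();
    }

    public static void main(String[] args) {
        System.out.println(process(Arrays.asList("r1", "r2", "r3")));
    }
}
```

The real class adds the per-category lookup and the task-chain processor on top of this loop.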

 

public class TaskReportQueueImpl extends ControlSwitcher implements TaskReportQueue {

    private Log logger ;

    private BlockingQueue<TaskExecutionReport> queue ;

    private ExecutorService service ;

    public TaskReportQueueImpl() {
        super();
    }

    @Override
    public Runnable getManagementThread(final WorkConfiguration config) {
        final Log logger = this.logger ;
        final BlockingQueue<TaskExecutionReport> queue = this.queue ;
        final ExecutorService service = this.service ;
        final TaskChainProcessor processor = new TaskChainProcessor();
        processor.setEngine(getEngine());
        return new Runnable(){
            public void run() {
                while(isStartForWork()) {
                    try {
                        final TaskExecutionReport report = queue.take();
                        final String category = report.getTask().getCategory();
                        report.getTask().setReportTime(DateFormatter.now());
                        Runnable handler = new Runnable(){
                            public void run() {
                                TaskProcessDescriptor desc = null ;
                                desc = getEngine().getTaskCategoryInformation(category);
                                if (desc == null) {
                                    logger.warn("the task category["+category+"] is not registered.");
                                    return ;
                                }
                                if (StringUtils.isNotEmpty(report.getTask().getGroup())) {
                                    processor.process(report);
                                }
                                desc.getResultProcessor().process(report);
                            }
                        };
                        service.execute(handler);
                    } catch (InterruptedException e) {
                        String msg = "the task report queue is interrupted, " +
                                "engine is still start for work["+isStartForWork()+"].";
                        logger.warn(msg);
                    }
                }
            }
        };
    }

    public void postInitial() {
        WorkConfiguration config = getConfiguration() ;
        queue = new ArrayBlockingQueue<TaskExecutionReport>(config.getResultQueueCapacity());
        service = getEngine().getExecutorService() ;
        logger = getConfiguration().getLog() ;
    }

    /*
     * @see
     * com.oocllogistics.domestic.common.worktask.TaskReporter#submitReport(com.oocllogistics.domestic.common.worktask
     * .TaskExecutionReport)
     */
    public void submitReport(TaskExecutionReport report) {
        if (!queue.offer(report)) {
            Task task = report.getTask() ;
            logger.warn("failed to submit the task report for " + task);
        }
    }

    public Iterator<TaskExecutionReport> iterator() {
        TaskExecutionReport[] reports = queue.toArray(new TaskExecutionReport[queue.size()]);
        return Arrays.asList(reports).iterator();
    }

    public int capacity() {
        return getConfiguration().getResultQueueCapacity();
    }

    public int size() {
        return queue.size();
    }



}
 

 

    Those are the core interfaces and implementations. I hope they help anyone who wants to learn how to write a thread pool.

 

    The heaviest test run used 25 client threads (submitting tasks), 20 worker threads (processing tasks), 5 task-result processing threads, and 3 management threads.

    Under these conditions, the test cases below all pass, CPU usage stays below 5%, and a task's time spent waiting in the queue and being dispatched is essentially zero.

 

 

    //SMALLEST -- 500ms
    //SMALLER -- 1000ms
    //NORMAL -- 2000ms
    //BIGGER -- 5000ms
    //BIGGEST -- 5000ms
    //test time unit is second

    public void testStandardPerformanceInMinute() {
        concurrentTaskThread = 10 ;
        totalTasks.put(Category.SMALLEST, 400);
        totalTasks.put(Category.SMALLER, 350);
        totalTasks.put(Category.NORMAL, 250);
        testTime = 60 ;
        testPerformance();
    }

    public void testAdvancePerformanceIn2Minute() {
        concurrentTaskThread = 20 ;
        totalTasks.put(Category.SMALLEST, 300);
        totalTasks.put(Category.SMALLER, 300);
        totalTasks.put(Category.NORMAL, 200);
        totalTasks.put(Category.BIGGER, 100);
        totalTasks.put(Category.BIGGEST, 100);
        testTime = 120 ;
        testPerformance();
    }

    public void testPerformanceUnderHugeData() {
        concurrentTaskThread = 25 ;
        totalTasks.put(Category.SMALLEST, 1500);
        totalTasks.put(Category.SMALLER, 500);
        totalTasks.put(Category.NORMAL, 500);
        testTime = 120 ;
        testPerformance();
    }

 

   Discussion welcome....

 
