flask-apscheduler: pitfalls in practice

Everything in this post is based on the Flask framework.
To avoid circular imports when extensions are referenced back and forth between modules, create an extensions.py:

from flask_sqlalchemy import SQLAlchemy
from flask_apscheduler import APScheduler
from apscheduler.schedulers.background import BackgroundScheduler

db = SQLAlchemy()
scheduler = APScheduler(BackgroundScheduler())

The Flask settings.py module:

from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
from apscheduler.executors.pool import ThreadPoolExecutor, ProcessPoolExecutor

class Config(object):
    DEBUG = False
    TESTING = False
    SECRET_KEY = "hkkjdfsJj&*(("


class ProductionConfig(Config):
    pass


class DevelopmentConfig(Config):
    SQLALCHEMY_DATABASE_URI = 'mysql://root:test123@127.0.0.1:3306/flask_sql_demo'
    SQLALCHEMY_TRACK_MODIFICATIONS = False
    SQLALCHEMY_ECHO = False
    # MySQL database used to persist the scheduler's jobs
    SQLALCHEMY_DATABASE_URI2 = "mysql://root:test123@127.0.0.1:3306/apscheduler"
    SCHEDULER_API_ENABLED = True
    SCHEDULER_JOBSTORES = {
        'default': SQLAlchemyJobStore(url=SQLALCHEMY_DATABASE_URI2)
    }
    SCHEDULER_EXECUTORS = {
        'default': ThreadPoolExecutor(1),
        # 'default': ProcessPoolExecutor(5)
    }
    SCHEDULER_JOB_DEFAULTS = {
        'coalesce': False,
        'max_instances': 2,
        'misfire_grace_time': None
    }


class TestingConfig(Config):
    pass
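Which of these config classes gets used is usually decided at startup. A minimal, hypothetical selector (the helper name and the FLASK_ENV convention are my own, not part of Flask itself) could look like this:

```python
import os


class Config:
    DEBUG = False


class DevelopmentConfig(Config):
    DEBUG = True


class ProductionConfig(Config):
    pass


# Map an environment name to its config class.
CONFIG_BY_NAME = {
    'development': DevelopmentConfig,
    'production': ProductionConfig,
}


def select_config(name=None):
    """Pick a config class by name, falling back to production."""
    name = name or os.environ.get('FLASK_ENV', 'production')
    return CONFIG_BY_NAME.get(name, ProductionConfig)
```

The app factory would then call app.config.from_object(select_config()).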

This config tells flask-apscheduler which database to persist its jobs in. After the database is migrated (the job store creates its table on first use), one table appears:

mysql> desc apscheduler_jobs;
+---------------+--------------+------+-----+---------+-------+
| Field         | Type         | Null | Key | Default | Extra |
+---------------+--------------+------+-----+---------+-------+
| id            | varchar(191) | NO   | PRI | NULL    |       |
| next_run_time | double       | YES  | MUL | NULL    |       |
| job_state     | blob         | NO   |     | NULL    |       |
+---------------+--------------+------+-----+---------+-------+
3 rows in set (0.01 sec)

The table has been generated as shown (the extensions themselves are installed the usual way, e.g. pip install flask-apscheduler).
To keep the same scheduled jobs from being started several times when the app runs with multiple processes, a file lock is used so that only one process starts the scheduler:

import atexit
import platform

from extensions import scheduler  # the APScheduler instance created above


def scheduler_init(app):
    """
    Make sure the scheduled jobs are started only once,
    no matter how many worker processes the app runs with.
    :param app: the Flask application
    :return:
    """
    if platform.system() != 'Windows':
        fcntl = __import__("fcntl")
        f = open('scheduler.lock', 'wb')
        try:
            # Non-blocking exclusive lock; only one process can hold it.
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            scheduler.init_app(app)
            scheduler.start()
            app.logger.debug('Scheduler Started,---------------')
        except OSError:
            # Another worker already holds the lock; don't start a second scheduler.
            pass

        def unlock():
            fcntl.flock(f, fcntl.LOCK_UN)
            f.close()

        atexit.register(unlock)
    else:
        msvcrt = __import__('msvcrt')
        f = open('scheduler.lock', 'wb')
        try:
            msvcrt.locking(f.fileno(), msvcrt.LK_NBLCK, 1)
            scheduler.init_app(app)
            scheduler.start()
            app.logger.debug('Scheduler Started,----------------')
        except OSError:
            # Another worker already holds the lock.
            pass

        def _unlock_file():
            try:
                f.seek(0)
                msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, 1)
            except OSError:
                pass

        atexit.register(_unlock_file)

With flask-apscheduler installed, its built-in web API is enabled through the config flag:
SCHEDULER_API_ENABLED = True
(screenshot)
The code behind the custom endpoints:

import datetime
import os
import threading
import time


def stamp():
    # Timestamp plus thread id, thread name and process id, so the log
    # shows exactly which worker ran the job.
    print(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
          threading.get_ident(), threading.current_thread().name, os.getpid())


def my_job():
    stamp()
    print("============ starting a 10s sleep ============")
    time.sleep(10)
    for _ in range(2):
        stamp()
        print("============ sleeping another 10s ============")
        time.sleep(10)
    stamp()
    print("================ sleep finished ================")
from flask import request, jsonify  # aps is the blueprint these views are registered on


@aps.route('/cron/add', methods=['GET', 'POST'])
def CrontabAdd():
    print(request.form)
    id = request.form.get('id')
    trigger_type = request.form.get('trigger_type')
    if trigger_type == "date":
        run_time = request.form.get('run_time')
        scheduler.add_job(func=my_job,
                          trigger=trigger_type,
                          run_date=run_time,
                          replace_existing=True,
                          coalesce=False,
                          id=id)
        print("one-off job added successfully---[ %s ] " % id)
    elif trigger_type == 'interval':
        seconds = int(request.form.get('interval_time'))
        if seconds <= 0:
            raise ValueError('the interval must be greater than 0 seconds!')
        scheduler.add_job(func=my_job,
                          trigger=trigger_type,
                          seconds=seconds,
                          replace_existing=True,
                          coalesce=False,
                          id=id)
    elif trigger_type == "cron":
        # request.form is a flat mapping, so the cron fields must be read
        # individually (indexing form.get("run_time") with a key would fail).
        day_of_week = request.form.get("day_of_week")
        hour = request.form.get("hour")
        minute = request.form.get("minute")
        second = request.form.get("second")
        scheduler.add_job(func=my_job, id=id, trigger=trigger_type,
                          day_of_week=day_of_week, hour=hour, minute=minute,
                          second=second, coalesce=False, replace_existing=True)
        print("recurring job added successfully---[ %s ] " % id)
    return jsonify(msg="job added successfully")
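Since request.form is flat, the trigger parameters have to arrive as individual form fields. One way to keep the route thin is to move the parsing into a helper; the field names below mirror the route above, but the helper itself is my own sketch, not part of flask-apscheduler:

```python
def build_add_job_kwargs(form):
    """Turn the flat form fields from /cron/add into keyword arguments
    suitable for scheduler.add_job()."""
    trigger_type = form['trigger_type']
    kwargs = {'id': form['id'], 'trigger': trigger_type,
              'replace_existing': True, 'coalesce': False}
    if trigger_type == 'date':
        kwargs['run_date'] = form['run_time']
    elif trigger_type == 'interval':
        seconds = int(form['interval_time'])
        if seconds <= 0:
            raise ValueError('the interval must be greater than 0 seconds!')
        kwargs['seconds'] = seconds
    elif trigger_type == 'cron':
        # Cron fields arrive as separate flat keys.
        for field in ('day_of_week', 'hour', 'minute', 'second'):
            kwargs[field] = form[field]
    else:
        raise ValueError('unknown trigger type: %s' % trigger_type)
    return kwargs
```

The route body then becomes scheduler.add_job(func=my_job, **build_add_job_kwargs(request.form)).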

@aps.route('/cron/<task_id>/delete', methods=['GET', 'POST'])
def CrontabDelete(task_id):
    response = {'status': False}
    try:
        scheduler.remove_job(task_id)
        response['status'] = True
        response['msg'] = "job[%s] remove success!" % task_id
    except Exception as e:
        response['msg'] = str(e)
    return jsonify(response)


# pause a job
@aps.route('/cron/<task_id>/pause', methods=['GET', 'POST'])
def CrontabPasue(task_id):
    response = {'status': False}
    try:
        scheduler.pause_job(task_id)
        response['status'] = True
        response['msg'] = "job[%s] pause success!" % task_id
    except Exception as e:
        response['msg'] = str(e)
    return jsonify(response)


@aps.route('/cron/<task_id>/resume', methods=['GET', 'POST'])
def CrontabResume(task_id):
    response = {'status': False}
    try:
        scheduler.resume_job(task_id)
        response['status'] = True
        response['msg'] = "job[%s] resume success!" % task_id
    except Exception as e:
        response['msg'] = str(e)
    return jsonify(response)

Calling the endpoint stops a scheduled job directly:
(screenshot)
Adding a job:
(screenshot)
Job status:
(screenshot)
The paused job above is indeed no longer running.
Restarting the job:
(screenshot)
Querying the current status:
(screenshot)
Deleting the job:
(screenshot)
Listing the currently registered jobs:
(screenshot)
That completes the functional verification.
If the executor is switched to a process pool:

SCHEDULER_EXECUTORS = {
    # 'default': ThreadPoolExecutor(2),
    'default': ProcessPoolExecutor(5)
}

multiple worker processes are started.
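APScheduler's ProcessPoolExecutor wraps the standard library's concurrent.futures process pool, so the effect of this setting can be illustrated without the scheduler at all. A small sketch (the function names are mine) showing that the tasks are handled by several distinct worker PIDs:

```python
import os
from concurrent.futures import ProcessPoolExecutor


def report_pid(_):
    # Runs inside a pool worker; each worker is a separate OS process.
    return os.getpid()


def worker_pids(n_workers=5, n_tasks=20):
    """Run trivial tasks on a process pool and collect the worker PIDs."""
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return set(pool.map(report_pid, range(n_tasks)))
```

Calling worker_pids() returns up to five distinct PIDs, none of them the parent's, which matches the varying process IDs in the job log further down.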

all URL rules registered on the app Map([<Rule '/scheduler/jobs' (POST, OPTIONS) -> scheduler.add_job>,
 <Rule '/scheduler/jobs' (HEAD, OPTIONS, GET) -> scheduler.get_jobs>,
 <Rule '/cron/add' (POST, HEAD, OPTIONS, GET) -> aps.CrontabAdd>,
 <Rule '/scheduler' (HEAD, OPTIONS, GET) -> scheduler.get_scheduler_info>,
 <Rule '/index' (POST, HEAD, OPTIONS, GET) -> ac.index>,
 <Rule '/test' (HEAD, OPTIONS, GET) -> ac.test>,
 <Rule '/user' (DELETE, POST, PUT, HEAD, OPTIONS, GET) -> crm.userresource>,
 <Rule '/role' (DELETE, POST, PUT, HEAD, OPTIONS, GET) -> crm.roleresource>,
 <Rule '/scheduler/jobs/<job_id>/resume' (POST, OPTIONS) -> scheduler.resume_job>,
 <Rule '/scheduler/jobs/<job_id>/pause' (POST, OPTIONS) -> scheduler.pause_job>,
 <Rule '/scheduler/jobs/<job_id>/run' (POST, OPTIONS) -> scheduler.run_job>,
 <Rule '/scheduler/jobs/<job_id>' (HEAD, OPTIONS, GET) -> scheduler.get_job>,
 <Rule '/scheduler/jobs/<job_id>' (DELETE, OPTIONS) -> scheduler.delete_job>,
 <Rule '/scheduler/jobs/<job_id>' (OPTIONS, PATCH) -> scheduler.update_job>,
 <Rule '/cron/<task_id>/delete' (POST, HEAD, OPTIONS, GET) -> aps.CrontabDelete>,
 <Rule '/cron/<task_id>/resume' (POST, HEAD, OPTIONS, GET) -> aps.CrontabResume>,
 <Rule '/cron/<task_id>/pause' (POST, HEAD, OPTIONS, GET) -> aps.CrontabPasue>,
 <Rule '/static/<filename>' (HEAD, OPTIONS, GET) -> static>])
 * Serving Flask app "s1pro_flask" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
all URL rules registered on the app Map([<Rule '/cron/add' (GET, OPTIONS, HEAD, POST) -> aps.CrontabAdd>,
 <Rule '/index' (GET, OPTIONS, HEAD, POST) -> ac.index>,
 <Rule '/test' (GET, OPTIONS, HEAD) -> ac.test>,
 <Rule '/user' (OPTIONS, DELETE, GET, POST, PUT, HEAD) -> crm.userresource>,
 <Rule '/role' (OPTIONS, DELETE, GET, POST, PUT, HEAD) -> crm.roleresource>,
 <Rule '/cron/<task_id>/delete' (GET, OPTIONS, HEAD, POST) -> aps.CrontabDelete>,
 <Rule '/cron/<task_id>/resume' (GET, OPTIONS, HEAD, POST) -> aps.CrontabResume>,
 <Rule '/cron/<task_id>/pause' (GET, OPTIONS, HEAD, POST) -> aps.CrontabPasue>,
 <Rule '/static/<filename>' (GET, OPTIONS, HEAD) -> static>])
(each further worker process prints the same map; note it lacks the /scheduler rules, presumably because only the lock-holding process initialized the scheduler API — repeated dumps omitted)

2022-06-08 17:18:06 12360 MainThread 6292
============ starting a 10s sleep ============
2022-06-08 17:18:06 17872 MainThread 10512
============ starting a 10s sleep ============
(further copies of the same URL map printed by the remaining worker processes omitted)
2022-06-08 17:18:08 12448 MainThread 2172
============ starting a 10s sleep ============
2022-06-08 17:18:09 14692 MainThread 18400
============ starting a 10s sleep ============
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:13 CST)" skipped: maximum number of running instances reached (2)
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:14 CST)" skipped: maximum number of running instances reached (2)
2022-06-08 17:18:16 17872 MainThread 10512
============ sleeping another 10s ============
2022-06-08 17:18:16 12360 MainThread 6292
============ sleeping another 10s ============
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:18 CST)" skipped: maximum number of running instances reached (2)
2022-06-08 17:18:18 12448 MainThread 2172
============ sleeping another 10s ============
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:19 CST)" skipped: maximum number of running instances reached (2)
2022-06-08 17:18:19 14692 MainThread 18400
============ sleeping another 10s ============
2022-06-08 17:18:22 7936 MainThread 6040
============ starting a 10s sleep ============
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:23 CST)" skipped: maximum number of running instances reached (2)
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:24 CST)" skipped: maximum number of running instances reached (2)
2022-06-08 17:18:26 12360 MainThread 6292
2022-06-08 17:18:26 17872 MainThread 10512
============ sleeping another 10s ============
============ sleeping another 10s ============
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:28 CST)" skipped: maximum number of running instances reached (2)
2022-06-08 17:18:28 12448 MainThread 2172
============ sleeping another 10s ============
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:29 CST)" skipped: maximum number of running instances reached (2)
2022-06-08 17:18:29 14692 MainThread 18400
============ sleeping another 10s ============
2022-06-08 17:18:32 7936 MainThread 6040
============ sleeping another 10s ============
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:33 CST)" skipped: maximum number of running instances reached (2)
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:34 CST)" skipped: maximum number of running instances reached (2)
2022-06-08 17:18:36 17872 MainThread 10512
================ sleep finished ================
2022-06-08 17:18:36 12360 MainThread 6292
2022-06-08 17:18:36 17872 MainThread 10512
============ starting a 10s sleep ============
================ sleep finished ================
2022-06-08 17:18:36 12360 MainThread 6292
============ starting a 10s sleep ============
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:38 CST)" skipped: maximum number of running instances reached (2)
2022-06-08 17:18:38 12448 MainThread 2172
================ sleep finished ================
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:39 CST)" skipped: maximum number of running instances reached (2)
2022-06-08 17:18:39 14692 MainThread 18400
================ sleep finished ================
2022-06-08 17:18:42 7936 MainThread 6040
============ sleeping another 10s ============
2022-06-08 17:18:43 12448 MainThread 2172
============ starting a 10s sleep ============
2022-06-08 17:18:44 14692 MainThread 18400
============ starting a 10s sleep ============
2022-06-08 17:18:46 17872 MainThread 10512
============ sleeping another 10s ============
2022-06-08 17:18:46 12360 MainThread 6292
============ sleeping another 10s ============
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:48 CST)" skipped: maximum number of running instances reached (2)
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:49 CST)" skipped: maximum number of running instances reached (2)
2022-06-08 17:18:52 7936 MainThread 6040
================ sleep finished ================
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:53 CST)" skipped: maximum number of running instances reached (2)
2022-06-08 17:18:53 12448 MainThread 2172
============ sleeping another 10s ============
Execution of job "my_job (trigger: interval[0:00:05], next run at: 2022-06-08 17:18:54 CST)" skipped: maximum number of running instances reached (2)
2022-06-08 17:18:54 14692 MainThread 18400
============ sleeping another 10s ============
2022-06-08 17:18:56 17872 MainThread 10512
============ sleeping another 10s ============
2022-06-08 17:18:56 12360 MainThread 6292
============ sleeping another 10s ============
Process Process-5:

Every run shows a different process ID: five worker processes are started up front, and because each job allows at most two concurrent instances (max_instances: 2), all five processes can be kept busy.
With the thread pool executor there is instead a single process ID with multiple thread IDs; see my django-apscheduler article for a detailed explanation. The full code is available at:
https://download.csdn.net/download/laoli815/85585618
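The thread-pool case mentioned above (one process ID, many thread IDs) can be demonstrated with the standard library alone; the helper names here are my own:

```python
import os
import threading
from concurrent.futures import ThreadPoolExecutor


def thread_ids(n_workers=4, n_tasks=8):
    """Run trivial tasks on a thread pool; every task sees the same PID
    but potentially several different thread identifiers."""
    def report(_):
        return os.getpid(), threading.get_ident()

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(report, range(n_tasks)))
    pids = {pid for pid, _ in results}
    tids = {tid for _, tid in results}
    return pids, tids
```

Unlike the process-pool run, all jobs report the parent's own PID.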
