Django + Celery deployment
1、Reference ("老男孩"/Alex's blog):
http://www.cnblogs.com/alex3714/p/6351797.html
2、Preparation
1、Deploy the databases
Redis deployment:
https://blog.csdn.net/feifeiyechuan/article/details/104463237
2、Install the required packages
pip install celery
pip install django-celery-results # store task results with Django
pip install redis
pip install django_celery_beat # scheduler; periodic tasks can be managed from the admin
3、Run the database migrations
python manage.py makemigrations # not strictly required, but best to run it
python manage.py migrate
4、Create a superuser
python manage.py createsuperuser
5、Log into the admin and you will see the new tables; the last part of the reference link above walks through them
3、Project layout
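No tree was recorded here. A plausible layout, inferred from the module paths used in the sections below (project follow_up_visit, app visit_task), would be:
follow_up_visit/
    manage.py
    follow_up_visit/
        __init__.py # loads celery_app, see section 6
        settings.py # see section 4
        celery.py # see section 5
        urls.py
    visit_task/
        __init__.py
        tasks.py # see section 7
    log/
        celery/
            celery.log # created by the logging setup in section 5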
4、settings.py
# djcelery.setup_loader()
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'visit_task',
'django_celery_results',
'django_celery_beat',
]
# celery
BROKER_URL = 'redis://:password@host:port/0' # redis://:password@hostname:port/db_number
CELERY_RESULT_BACKEND = 'django-db' # store task results in the Django database; requires django_celery_results in INSTALLED_APPS
CELERY_ACCEPT_CONTENT = ['application/json', ]
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = TIME_ZONE
CELERYD_PREFETCH_MULTIPLIER = 1 # number of tasks each worker prefetches at a time
CELERYD_FORCE_EXECV = True # very important: can prevent deadlocks in some cases
# recycle each celery worker child process after it has run this many tasks
CELERYD_MAX_TASKS_PER_CHILD = 40
# CELERY_ALWAYS_EAGER = True # if enabled, Celery runs in eager mode: tasks execute synchronously without delay(); never, ever enable this
5、celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery, platforms
from kombu import Exchange, Queue
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'follow_up_visit.settings')
app = Celery('follow_up_visit')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# Note: no namespace='CELERY' argument is passed here, because the
# old-style unprefixed setting names (BROKER_URL, CELERYD_*) are used.
app.config_from_object('django.conf:settings')
CELERY_TIMEZONE = 'Asia/Shanghai' # there is no Beijing timezone; this should match TIME_ZONE in settings.py
# BROKER_URL = 'amqp://guest:guest@localhost:5672//' # any reachable broker works; it does not have to run on the same host as the Django server (5672 is the AMQP port; 15672 is the management UI)
# CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
# CELERY_RESULT_BACKEND = 'django-db'
# RabbitMQ as the broker:
# app = Celery('follow_up_visit', backend='amqp', broker='amqp://admin:admin@localhost')
# Load task modules from all registered Django app configs.
from django.conf import settings # noqa
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
# allow celery to run as the root user
platforms.C_FORCE_ROOT = True
CELERYD_FORCE_EXECV = True # very important: can prevent deadlocks in some cases
@app.task(bind=True)
def debug_task(self):
print('Request: {0!r}'.format(self.request))
# periodic tasks
from celery.schedules import crontab, timedelta
app.conf.update(
CELERYBEAT_SCHEDULE={
# 'update_task': {
# 'task': 'visit_task.tasks.update_task',
# 'schedule': timedelta(seconds=10),
# 'args': ()
# },
# 'schedule_task': {
# 'task': 'visit_task.tasks.start_happyNewYear',
# 'schedule': crontab(month_of_year=1, day_of_month=24, hour=12, minute=5),
# 'args': ()
# }
}
)
# names for the individual celery queues
# Reference: https://blog.csdn.net/qq_28295425/article/details/83998754
CELERY_QUEUES = (
Queue("update_task", Exchange("update_task"), routing_key="task_a"),
Queue("handle_lostVisit", Exchange("handle_lostVisit"), routing_key="task_b"),
Queue("lostvisit_contrab", Exchange("lostvisit_contrab"), routing_key="task_c"),
)
CELERY_ROUTES = {
"tasks.taskA": {"queue": "update_task", "routing_key": "task_a"},
"tasks.taskB": {"queue": "handle_lostVisit", "routing_key": "task_b"},
"tasks.taskC": {"queue": "lostvisit_contrab", "routing_key": "task_c"}
}
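With named queues, a worker can be pinned to a subset of them so long-running tasks do not block short ones. A sketch using Celery's standard -Q option (queue names from CELERY_QUEUES above):
celery -A follow_up_visit worker -Q update_task -l info # consumes only update_task
celery -A follow_up_visit worker -Q handle_lostVisit,lostvisit_contrab -l info # consumes the other two queues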
# logging setup
import logging.config
LOG_CONFIG = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'simple': {
'format': '%(asctime)s \"%(pathname)s:%(module)s:%(funcName)s:%(lineno)d\" [%(levelname)s]- %(message)s'
}
},
'handlers': {
'celery': {
'level': 'INFO',
'formatter': 'simple',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'log/celery/celery.log',
'encoding': 'utf-8',
},
},
'loggers': {
'celery': {
'handlers': ['celery'],
'level': 'INFO',
'propagate': True,
}
}
}
logging.config.dictConfig(LOG_CONFIG)
# logging usage
from celery.utils.log import get_task_logger
logger = get_task_logger('task') # the logger name can be anything
logger.info('Refresh task start and refresh success')
6、__init__.py in the same directory as settings.py
from __future__ import absolute_import, unicode_literals
import pymysql
# load mysql
pymysql.install_as_MySQLdb()
# load celery
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app
__all__ = ['celery_app']
7、tasks.py
from celery import shared_task, task
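Only the import was recorded here. A minimal visit_task/tasks.py sketch (update_task matches the task path in the CELERYBEAT_SCHEDULE comments above, add is used in the result examples below; the bodies are illustrative):
from celery import shared_task

@shared_task
def add(x, y):
    return x + y

@shared_task(bind=True)
def update_task(self):
    # self.request carries the task metadata (id, args, retries, ...)
    print('Request: {0!r}'.format(self.request))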
8、Invoking tasks
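The usual calling patterns, as a sketch (task names from the tasks.py sketch above; delay/apply_async are the standard Celery APIs):
from visit_task.tasks import add
add.delay(2, 3) # shorthand: enqueue on the default queue
add.apply_async(args=(2, 3), queue='update_task') # route explicitly to a named queue
add.apply_async(args=(2, 3), countdown=10) # run no earlier than 10 seconds from now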
9、Startup commands
conda activate py368
export PYTHONIOENCODING=utf-8
nohup python manage.py runserver 0.0.0.0:7068 & # start the Django service
celery -A follow_up_visit worker -l info # start one worker
# stop celery
ps auxww | grep 'celery' | awk '{print $2}' | xargs kill -9
# redis operations:
# flush redis
redis-cli -a 123456
flushall
exit
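# To gauge the backlog: the Redis broker keeps each queue as a list keyed by the queue name ('celery' is the default). A sketch:
redis-cli -a 123456 llen update_task # number of pending messages in the update_task queue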
# Other notes
(1)
celery multi start w1 -A project_name --loglevel=INFO # w1, w2, ... name the workers; each call still starts one, detached, so no live log appears in the shell; starting several named workers gives you distribution. Used in production.
(2)
The beat scheduler must be running for periodic tasks to fire; Django only stores the schedule (a sketch for creating schedules from code follows after this note)
celery -A project_name beat -l info -S django # -S selects where schedules are read from (restart beat after adding new entries)
then start a worker and everything runs
e.g.:
celery multi restart w1 -A project_name -l info # restart worker w1
celery multi stop w2 # stop worker w2
If scheduled tasks are never dispatched, clear out the database: a large backlog of queued tasks gets executed first
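Besides the admin, schedules for -S django can also be created from code; a sketch using django_celery_beat's models (the task path matches the one commented out in section 5):
from django_celery_beat.models import PeriodicTask, IntervalSchedule

schedule, _ = IntervalSchedule.objects.get_or_create(
    every=10, period=IntervalSchedule.SECONDS,
)
PeriodicTask.objects.create(
    interval=schedule,
    name='update every 10s', # any unique human-readable name
    task='visit_task.tasks.update_task',
)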
(3) About task results
Creating a call
from tasks import add
res = add.delay() # run the task asynchronously
res.get(timeout=1) # returns the task's return value; without timeout it blocks until the result arrives
res.get(propagate=False) # do not re-raise an exception raised inside the task
res.ready() # whether the task has finished
res.task_id # the task id
from celery.result import AsyncResult
res = AsyncResult(task_id) # look up a result by task_id
res.status # task state, e.g. PENDING / SUCCESS
res.get() # fetch the result
res.traceback # traceback string if the task failed
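Because CELERY_RESULT_BACKEND = 'django-db', the same results are also queryable through the ORM; a sketch using django_celery_results' TaskResult model:
from django_celery_results.models import TaskResult

row = TaskResult.objects.filter(task_id=res.task_id).first()
if row is not None:
    print(row.status, row.result) # e.g. SUCCESS and the JSON-serialized return value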
# The alternative below used RabbitMQ for storage instead (no longer used; too messy):
1、Django + Celery asynchronous execution
Walkthrough: https://ops-coffee.cn/s/lXrp3igYo9W2UuE5Gauysg
Installation:
Install reference: https://blog.csdn.net/m0_37034294/article/details/82839494 (Windows)
(1) Install Erlang
https://www.erlang.org/downloads
(2) Install rabbitmq-server
https://www.rabbitmq.com/download.html
2、Server deployment
(1) Start the containers:
Django:
sudo docker run -it -p 7080:8000 --name='suifang_v1.0' --network sf_network --network-alias sf-network ubuntu_tfgpu_py36:v1.0 /bin/bash
mysql:
docker run -d -e MYSQL_ROOT_PASSWORD=123456 -p 7083:3306 --network sf_network --network-alias sf-network --name mysql_suifang_v1.0 mysql
Enter the container and switch the environment:
conda activate py368
https://www.jianshu.com/p/6dd13d39d613
(2) Pull the code
(3) Install celery
a、Install the rabbitmq-server service (RabbitMQ as the message queue) (reference: "RabbitMQ installation and initial configuration", https://www.cnblogs.com/chrischennx/p/7071471.html)
apt-get install rabbitmq-server
Start the service:
rabbitmq-server start # start the service (this is the only one you need)
rabbitmq-server stop # stop the service
rabbitmqctl status # check status
nohup rabbitmq-server start &
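To check whether messages are piling up, the standard RabbitMQ CLI can list queues with their message counts:
rabbitmqctl list_queues name messages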
b、Install celery
pip install celery
(4) Pin the Django version
pip install Django==2.0.6 -i https://pypi.douban.com/simple
(5) Install packages
pip install PyMySQL -i https://pypi.douban.com/simple
pip install websocket -i https://pypi.douban.com/simple
pip install websocket-client -i https://pypi.douban.com/simple (source patched)
Patch: in /root/miniconda3/envs/py368/lib/python3.6/site-packages/websocket/, around line 84, add the parameters ", cur_message=None, send_message=None" to __init__, and inside that method add: self.cur_message = cur_message and self.send_message = send_message
pip install requests -i https://pypi.douban.com/simple
(6) Start Celery
Start a worker to wait for tasks
pip install eventlet # install eventlet (avoids errors on Windows)
celery -A follow_up_visit worker -l info -P eventlet # on Windows, -P eventlet avoids errors
nohup celery -A follow_up_visit worker -l info &
Start beat task scheduling
celery -A follow_up_visit beat -l info
nohup celery -A follow_up_visit beat -l info &
Start everything in one process (worker with embedded beat)
celery -A project_name worker -B -l info # -B embeds the beat scheduler into the worker
(7) Start the Django project
nohup python manage.py runserver 0.0.0.0:8000 &
(8) Run summary:
cd /tmp/follow_up_visit
conda activate py368
nohup rabbitmq-server start &
nohup python manage.py runserver 0.0.0.0:8000 &
nohup celery -A follow_up_visit beat -l info &
nohup celery -A follow_up_visit worker -l info &
# stop celery
ps auxww | grep 'celery' | awk '{print $2}' | xargs kill -9