Python database connection pooling: accessing a MySQL connection pool from Python multiprocessing

In Python, creating a single database connection pool and sharing it across multiple processes is not recommended, because each process should have its own connections. The article points out that creating a connection pool for every instance of the DB class wastes resources. The correct approach is to create one pool as a global during each worker's initialization and obtain connections from it inside that process. The example code shows how to use a connection pool correctly in a multiprocessing environment, which also limits the number of concurrent connections.

I'm trying to set up a MySQL connection pool and have my worker processes access the already established pool instead of setting up a new connection each time.

I'm not sure whether I should pass the database cursor to each process, or if there's some other way to do this. Shouldn't mysql.connector do the pooling automatically? When I check my log files, many, many connections are opened and closed... one for each process.

My code looks something like this:

import os
import sys
import multiprocessing
from time import sleep

import mysql.connector
import mysql.connector.pooling
from mysql.connector.cursor import MySQLCursorDict

import config  # project-specific module that provides config.dbconfig

PATH = "/tmp"

class DB(object):

    def __init__(self):
        connected = False
        while not connected:
            try:
                # A new pool is created for every DB instance.
                cnxpool = mysql.connector.pooling.MySQLConnectionPool(
                    pool_name="pool1", **config.dbconfig)
                self.__cnx = cnxpool.get_connection()
                connected = True
            except mysql.connector.errors.PoolError:
                print("Sleeping.. (Pool Error)")
                sleep(5)
            except mysql.connector.errors.DatabaseError:
                print("Sleeping.. (Database Error)")
                sleep(5)

        self.__cur = self.__cnx.cursor(cursor_class=MySQLCursorDict)

    def execute(self, query):
        return self.__cur.execute(query)

def isValidFile(name):
    return True

def readFile(fname):
    d = DB()
    d.execute("""INSERT INTO users (first_name) VALUES ('michael')""")

def main():
    queue = multiprocessing.Queue()
    # `init` is the worker initializer (not shown in this excerpt).
    pool = multiprocessing.Pool(None, init, [queue])
    for dirpath, dirnames, filenames in os.walk(PATH):
        full_path_fnames = map(lambda fn: os.path.join(dirpath, fn),
                               filenames)
        full_path_fnames = filter(isValidFile, full_path_fnames)
        pool.map(readFile, full_path_fnames)

if __name__ == '__main__':
    sys.exit(main())

Solution

First, you're creating a different connection pool for each instance of your DB class. Pools having the same name doesn't make them the same pool; as the MySQL Connector/Python documentation puts it:

It is not an error for multiple pools to have the same name. An application that must distinguish pools by their pool_name property should create each pool with a distinct name.
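To see this concretely, here is a minimal sketch (assuming a reachable MySQL server and a hypothetical dbconfig dict with valid credentials): two pools created with the same pool_name are still two separate objects, each holding its own connections:

from mysql.connector.pooling import MySQLConnectionPool

# Hypothetical credentials -- replace with your own settings.
dbconfig = {"host": "127.0.0.1", "user": "test",
            "password": "test", "database": "test"}

# Two pools with the same name: each one opens its own pool_size connections.
pool_a = MySQLConnectionPool(pool_name="pool1", pool_size=2, **dbconfig)
pool_b = MySQLConnectionPool(pool_name="pool1", pool_size=2, **dbconfig)

print(pool_a is pool_b)                    # False -- distinct pool objects
print(pool_a.pool_name, pool_b.pool_name)  # pool1 pool1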

Besides that, sharing a database connection (or connection pool) between different processes would be a bad idea (and I highly doubt it would even work correctly), so each process using its own connections is actually what you should aim for.

You could just initialize the pool as a global variable in the worker initializer (init) and use that instead.

Very simple example:

from multiprocessing import Pool
from mysql.connector.pooling import MySQLConnectionPool
from mysql.connector import connect
import os

pool = None

def init():
    # Runs once in every worker process: each process gets its own pool.
    global pool
    print("PID %d: initializing pool..." % os.getpid())
    pool = MySQLConnectionPool(...)

def do_work(q):
    # Borrow a connection from this process's pool, run the query, give it back.
    con = pool.get_connection()
    print("PID %d: using connection %s" % (os.getpid(), con))
    c = con.cursor()
    c.execute(q)
    res = c.fetchall()
    con.close()
    return res

def main():
    p = Pool(initializer=init)
    for res in p.map(do_work, ['select * from test'] * 8):
        print(res)
    p.close()
    p.join()

if __name__ == '__main__':
    main()

Or just use a simple connection instead of a connection pool, as only one connection will be active in each process at a time anyway.

The number of concurrently used connections is implicitly limited by the size of the multiprocessing.Pool.
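For comparison, here is a minimal sketch of that simpler variant (again assuming a hypothetical dbconfig dict with valid credentials): each worker process opens one plain connection in its initializer and reuses it for every task, so the total number of open connections equals the number of worker processes:

from multiprocessing import Pool
from mysql.connector import connect
import os

# Hypothetical credentials -- replace with your own settings.
dbconfig = {"host": "127.0.0.1", "user": "test",
            "password": "test", "database": "test"}

cnx = None

def init():
    # One plain connection per worker process, created once when the process starts.
    global cnx
    print("PID %d: connecting..." % os.getpid())
    cnx = connect(**dbconfig)

def do_work(q):
    # Reuse this process's single connection for every task it handles.
    cur = cnx.cursor()
    cur.execute(q)
    res = cur.fetchall()
    cur.close()
    return res

def main():
    # The connection count is bounded by the number of worker processes (4 here).
    with Pool(processes=4, initializer=init) as p:
        for res in p.map(do_work, ['select * from test'] * 8):
            print(res)

if __name__ == '__main__':
    main()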
