Test environment: Flask \ WinCC
Two production lines are synchronized here, 800 points each, so 1,600 points in total are synchronized within 6 seconds.
(In practice the sync finishes within 5 seconds; running on a real server it would be even faster.
The code has a flaw: the cycle method uploads on a fixed 6-second timer, which is risky. A better design is to call the upload function right after the points have been processed.
Also, when the point count is small, the data syncs within a second.)
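The safer event-driven approach mentioned above (upload as soon as a batch of points has been processed, instead of on a fixed 6-second timer) could be sketched roughly like this. `BatchHandler` and `flush_cb` are hypothetical names, and the MongoDB write is passed in as a callback so the sketch stays self-contained:

```python
class BatchHandler(object):
    """Collects data-change notifications and flushes a snapshot
    once every expected point has reported at least once."""

    def __init__(self, expected_points, flush_cb):
        self.expected = expected_points  # number of points on this line
        self.flush_cb = flush_cb         # e.g. mongo collection insert
        self.values = {}

    def datachange_notification(self, node, val, data):
        self.values[str(node)] = val
        if len(self.values) >= self.expected:
            # Upload immediately once the batch is complete,
            # instead of waiting for a fixed 6-second cycle.
            self.flush_cb(dict(self.values))
            self.values.clear()
```

In work() this handler would take the place of SubHandler, with something like `flush_cb=mongo_db["line%d" % i].insert_one`, which removes the need for the blocking cycle() loop entirely.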
import time
from concurrent.futures import ThreadPoolExecutor

from opcua import Client
import pymongo

# Create the OPC UA client with the server's IP address and port
client = Client("opc.tcp://192.168.0.110:4862/")
# Connect to the server
client.connect()

mongo_client = pymongo.MongoClient(host='127.0.0.1', port=21303)
mongo_db = mongo_client['test_databases']


def cycle(i, dicts_):
    # Upload the latest snapshot of line i's points every 6 seconds
    while True:
        time.sleep(6)
        # Drop the _id that insert_one added on the previous round,
        # otherwise MongoDB raises a duplicate-key error
        if dicts_.get("_id") is not None:
            dicts_.pop("_id")
        mongo_db["line{0}".format(i)].insert_one(dicts_)


# Custom subscription callback
class SubHandler(object):
    def __init__(self, dicts_):
        self.dicts = dicts_

    def datachange_notification(self, node, val, data):
        # Use the tag name (the part after '|' in the NodeId) as the key
        self.dicts[str(node).split("|")[1].split("))")[0]] = val


def work(i):
    dicts_ = dict()
    handler = SubHandler(dicts_)
    # 6000 ms publishing interval
    sub = client.create_subscription(6000, handler)
    # Raw string so the backslash in the WinCC channel path is kept literal
    myvar = client.get_node(
        r"ns=1;s=f|@LOCALMACHINE::SIMATIC S7-1200, S7-1500 Channel\Group_%d" % i)
    for item in myvar.get_children():
        # Subscribe to the node itself; item.get_path() returns the list of
        # ancestor nodes, which is not what we want to watch here
        sub.subscribe_data_change(item)
    cycle(i, dicts_)


if __name__ == "__main__":
    executor = ThreadPoolExecutor(max_workers=10)
    [executor.submit(work, i) for i in range(1, 10)]
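The key extraction in datachange_notification depends on the textual form of the node, which in python-opcua looks roughly like `Node(StringNodeId(ns=1;s=...|tag))` for string NodeIds. A small self-contained check of that parsing, using a made-up node string (the sample value below is an assumption, not taken from a real server):

```python
def node_key(node_repr):
    # Take the text after the first '|' and strip the trailing '))'
    return node_repr.split("|")[1].split("))")[0]

# Hypothetical repr mimicking what str(node) might yield for a WinCC tag
sample = "Node(StringNodeId(ns=1;s=f|@LOCALMACHINE::Channel\\Group_1.Motor1))"
print(node_key(sample))  # the part after '|', without the closing '))'
```

Note that this parsing breaks if the tag name itself contains `|` or `))`; reading `node.nodeid.Identifier` directly would be more robust.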
python opcua: synchronizing 1,600 points of data to MongoDB every 6 seconds